

Advanced stochastic processes: 
Part I 


Jan A. Van Casteren 



















Advanced stochastic processes: Part I 
2nd edition

© 2015 Jan A. Van Casteren & bookboon.com 
ISBN 978-87-403-1115-0 


The author is obliged to the Department of Mathematics and Computer Science of the University of Antwerp for its material support. The author is also indebted to Freddy Delbaen (ETH, Zürich), and the late Jean Haezendonck (University of Antwerp). A big part of Chapter 5 is due to these people. The author is thankful for comments and logistic support by numerous students who have taken courses based on this text. In particular, he is grateful to Lieven Smits and Johan Van Biesen (former students at the University of Antwerp), who wrote part of Chapters 1 and 2. The author gratefully acknowledges World Scientific Publishers (Singapore) for their permission to publish the contents of Chapter 4, which also makes up a substantial portion of Chapter 1 in [144]. The author also learned a lot from the book by Stirzaker [124]. Section 1 of Chapter 2 is taken from [124], and the author is indebted to David Stirzaker for allowing him to include this material in this book. The author is also grateful to the people of Bookboon, among whom Karin Hamilton Jakobsen and Ahmed Zsolt Dakroub, who assisted him in the final stages of the preparation of this book.


Contents

Preface
Chapter 1. Stochastic processes: prerequisites
1. Conditional expectation
2. Lemma of Borel-Cantelli
3. Stochastic processes and projective systems of measures
4. A definition of Brownian motion
5. Martingales and related processes
Chapter 2. Renewal theory and Markov chains
1. Renewal theory
2. Some additional comments on Markov processes
3. More on Brownian motion
4. Gaussian vectors
5. Radon-Nikodym Theorem
6. Some martingales





Chapter 3. An introduction to stochastic processes: Brownian motion, Gaussian processes and martingales
1. Gaussian processes
2. Brownian motion and related processes
3. Some results on Markov processes, on Feller semigroups and on the martingale problem
4. Martingales, submartingales, supermartingales and semimartingales
5. Regularity properties of stochastic processes
6. Stochastic integrals, Itô's formula
7. Black-Scholes model
8. An Ornstein-Uhlenbeck process in higher dimensions
9. A version of Fernique's theorem
10. Miscellaneous




Chapter 4. Stochastic differential equations
1. Solutions to stochastic differential equations
2. A martingale representation theorem
3. Girsanov transformation
Chapter 5. Some related results
1. Fourier transforms
2. Convergence of positive measures
3. A taste of ergodic theory
4. Projective limits of probability distributions
5. Uniform integrability
6. Stochastic processes
7. Markov processes
8. The Doob-Meyer decomposition via the Komlós theorem
Subjects for further research and presentations
Bibliography
Index


Preface


This book deals with several aspects of the theory of stochastic processes: Markov chains, renewal theory, Brownian motion, Brownian motion as a Gaussian process, Brownian motion as a Markov process, Brownian motion as a martingale, stochastic calculus, Itô's formula, regularity properties, Feller-Dynkin semigroups and (strong) Markov processes. Brownian motion can also be seen as a limit of normalized random walks. Another feature of the book is a thorough discussion of the Doob-Meyer decomposition theorem. It also contains some features of stochastic differential equations and the Girsanov transformation.

The first chapter (Chapter 1) contains a (gentle) introduction to the theory of stochastic processes. It is more or less required to understand the main part of the book, which consists of discrete (time) probability models (Chapter 2), of continuous time models, in particular Brownian motion (Chapter 3), and of certain aspects of stochastic differential equations and Girsanov's transformation (Chapter 4). In the final chapter (Chapter 5) a number of other, but related, issues are treated. Several of these topics are explicitly used in the main text (Fourier transforms of distributions, or characteristic functions of random vectors, Lévy's continuity theorem, Kolmogorov's extension theorem, uniform integrability); some of them, like the important Doob-Meyer decomposition theorem, are treated but not explicitly used. Of course, Itô's formula implies that a C²-function composed with a local semimartingale is again a semimartingale. The Doob-Meyer decomposition theorem yields that a submartingale of class (DL) is a semimartingale. Section 1 of Chapter 5 contains several aspects of Fourier transforms of probability distributions (characteristic functions). Among other results, Bochner's theorem is treated here. Section 2 contains convergence properties of positive measures. Section 3 gives some results in ergodic theory, and gives the connection with the strong law of large numbers (SLLN). Section 4 gives a proof of Kolmogorov's extension theorem (for a consistent family of probability measures on Polish spaces). In Section 5 the reader finds a short treatment of uniformly integrable families of functions in an L^p-space. For example, Scheffé's theorem is treated. Section 6 in Chapter 5 contains a precise description of the regularity properties (like almost sure right-continuity, almost sure existence of left limits) of stochastic processes like submartingales, Lévy processes, and others; it also contains a proof of Doob's maximal inequality for submartingales. Section 7 of the same chapter contains a description of Markov process theory starting from just one probability space instead of a whole family. The proof of the Doob-Meyer decomposition theorem is based on a result by Komlós: see Section 8. Throughout the book the reader will be exposed to martingales and related processes.




Readership. From the description of the contents it is clear that the text is designed for students at the graduate or master level. The author believes that Ph.D. students, and even researchers, might also benefit from these notes. The reader is introduced to the following topics: Markov processes, Brownian motion and other Gaussian processes, martingale techniques, stochastic differential equations, Markov chains and renewal theory, ergodic theory and limit theorems.



CHAPTER 1 

Stochastic processes: prerequisites 


In this chapter we discuss a number of relevant notions related to the theory of stochastic processes. Topics include conditional expectation, the distribution of Brownian motion, elements of Markov processes, and martingales. For completeness we insert the definitions of a σ-field or σ-algebra, and concepts related to measures.

1.1. Definition. A σ-algebra, or σ-field, on a set Ω is a subset 𝒜 of the power set 𝒫(Ω) with the following properties:

(i) Ω ∈ 𝒜;

(ii) A ∈ 𝒜 implies A^c := Ω∖A ∈ 𝒜;

(iii) if (A_n)_{n≥1} is a sequence in 𝒜, then ⋃_{n=1}^∞ A_n belongs to 𝒜.

Let 𝒜 be a σ-field on Ω. Unless otherwise specified, a measure is a mapping μ : 𝒜 → [0, ∞] with the following properties:

• μ(∅) = 0;

• if (A_n)_{n≥1} is a mutually disjoint sequence in 𝒜, then

  μ(⋃_{n=1}^∞ A_n) = lim_{N→∞} Σ_{n=1}^N μ(A_n) = Σ_{n=1}^∞ μ(A_n).

If μ is a measure on 𝒜 for which μ(Ω) = 1, then μ is called a probability measure; if μ(Ω) ≤ 1, then μ is called a sub-probability measure. If μ : 𝒜 → [0, 1] is a probability measure, then the triple (Ω, 𝒜, μ) is called a probability space, and the elements of 𝒜 are called events.

Let M be a collection of subsets of Ω, i.e. M ⊂ 𝒫(Ω), where Ω is some set as in Definition 1.1. The smallest σ-field containing M is called the σ-field generated by M, and it is often denoted by σ(M). Let (Ω, 𝒜, μ) be a sub-probability space, i.e. μ is a sub-probability measure on the σ-field 𝒜. Then we enlarge Ω with one point Δ, and enlarge 𝒜 to

𝒜^Δ := σ(𝒜 ∪ {{Δ}}) = {A ∈ 𝒫(Ω^Δ) : A ∩ Ω ∈ 𝒜}.

Then μ^Δ : 𝒜^Δ → [0, 1], defined by

μ^Δ(A) = μ(A ∩ Ω) + (1 − μ(Ω)) 1_A(Δ), A ∈ 𝒜^Δ, (1.1)

turns the space (Ω^Δ, 𝒜^Δ, μ^Δ) into a probability space. Here Ω^Δ = Ω ∪ {Δ}. This kind of construction also occurs in the context of Markov processes with


finite lifetime: see the equality (3.75) in (an outline of) the proof of Theorem 3.37. For the important relationship between Dynkin systems, or λ-systems, and σ-algebras, see Theorem 2.42.

1. Conditional expectation 

1.2. Definition. Let (Ω, 𝒜, P) be a probability space, and let A and B be events in 𝒜 such that P(B) > 0. The quantity P(A | B) = P(A ∩ B)/P(B) is then called the conditional probability of the event A with respect to the event B. We put P(A | B) = 0 if P(B) = 0.

Consider a finite partition {B_1, ..., B_n} of Ω with B_j ∈ 𝒜 for all j = 1, ..., n, let ℬ be the subfield of 𝒜 generated by the partition {B_1, ..., B_n}, and write

P[A | ℬ] = Σ_{j=1}^n P(A | B_j) 1_{B_j}.

Then P[A | ℬ] is a ℬ-measurable stochastic variable on Ω, and

∫_B P[A | ℬ] dP = ∫_B 1_A dP for all B ∈ ℬ.
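As a concrete check of this construction, the following sketch builds P[A | ℬ] for a fair die (a toy model invented here; `prob` and `cond_prob` are ad-hoc helper names, not notation from the text) and verifies the defining identity ∫_B P[A | ℬ] dP = ∫_B 1_A dP on each block of the partition.

```python
from fractions import Fraction

# Toy model: a fair die, Omega = {1,...,6}, P uniform (illustrative numbers).
omega = range(1, 7)
P = {w: Fraction(1, 6) for w in omega}

partition = [{1, 2, 3}, {4, 5, 6}]   # finite partition generating B
A = {2, 4, 6}                        # the event "the outcome is even"

def prob(event):
    return sum(P[w] for w in event)

# P[A | B](w) = sum_j P(A | B_j) 1_{B_j}(w): constant on each block B_j.
def cond_prob(A, w):
    Bj = next(B for B in partition if w in B)
    return prob(A & Bj) / prob(Bj)

# Defining property: for every block B, int_B P[A|B] dP = int_B 1_A dP.
for B in partition:
    assert sum(cond_prob(A, w) * P[w] for w in B) == prob(A & B)
```

On the block {1, 2, 3} the conditional probability of "even" is 1/3, on {4, 5, 6} it is 2/3; integrating the step function over all of Ω recovers the unconditional probability 1/2.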






Conversely, if f is a ℬ-measurable stochastic variable on Ω with the property that for all B ∈ ℬ the equality ∫_B f dP = ∫_B 1_A dP holds, then f = P[A | ℬ], P-almost surely. This is true because ∫_B (f − P[A | ℬ]) dP = 0 for all B ∈ ℬ. If ℬ is a subfield (more precisely a sub-σ-field, or sub-σ-algebra) generated by a finite partition of Ω, then for every A ∈ 𝒜 there exists one and only one class of variables in L¹(Ω, ℬ, P), which we denote by P[A | ℬ], with the following property:

∫_B P[A | ℬ] dP = ∫_B 1_A dP for all B ∈ ℬ.

The variable Σ_{j=1}^n P(A | B_j) 1_{B_j} is an element of the class P[A | ℬ].

If we fix B ∈ 𝒜 with P(B) > 0, then the measure A ↦ P(A | B) is a probability measure on (Ω, 𝒜). If P(B) = 0, then the measure A ↦ P(A | B) is the zero measure.


Let X be a P-integrable real- or complex-valued stochastic variable on Ω. Then X is also P(· | B)-integrable, and

∫ X dP(· | B) = E[X 1_B] / P(B), provided P(B) > 0.

This quantity is the average of the stochastic variable X over the event B. As before, it is easy to show that if ℬ is a subfield of 𝒜 generated by a finite partition {B_1, ..., B_n} of Ω, then for every P-integrable real- or complex-valued stochastic variable X on Ω there exists one and only one class of functions in L¹(Ω, ℬ, P), which we denote by E[X | ℬ], with the property that

∫_B E[X | ℬ] dP = ∫_B X dP for all B ∈ ℬ.

The variable Σ_{j=1}^n (∫ X dP(· | B_j)) 1_{B_j} is an element of the class E[X | ℬ].

The next theorem generalizes the previous properties to an arbitrary subfield (more precisely, sub-σ-field) ℬ of 𝒜.

1.3. Theorem (Theorem and definition). Let (Ω, 𝒜, P) be a probability space and let ℬ be a subfield of 𝒜. Then for every stochastic variable X ∈ L¹(Ω, 𝒜, P) there exists one and only one class in L¹(Ω, ℬ, P), which is denoted by E[X | ℬ] and which is called the conditional expectation of X with respect to ℬ, with the property that

∫_B E[X | ℬ] dP = ∫_B X dP for all B ∈ ℬ.

If X = 1_A, with A ∈ 𝒜, then we write P[A | ℬ] instead of E[1_A | ℬ]; if ℬ is generated by just one stochastic variable Y, then we write E[X | Y] and P[A | Y] instead of E[X | σ(Y)] and P[A | σ(Y)], respectively.

Proof. Suppose that X is real-valued; if X = Re X + i Im X is complex-valued, then we apply the following arguments to Re X and Im X. Upon writing the real-valued stochastic variable X as X = X⁺ − X⁻, where X^± are non-negative stochastic variables in L¹(Ω, 𝒜, P), without loss of generality we may and do assume that X ≥ 0. Define the measure μ : 𝒜 → [0, ∞) by μ(A) = ∫_A X dP, A ∈ 𝒜. Then μ is a finite measure which is absolutely continuous with respect to the measure P. We restrict μ to the measurable space (Ω, ℬ); its absolute continuity with respect to P confined to (Ω, ℬ) is preserved. From the Radon-Nikodym theorem it follows that there exists a unique class Y ∈ L¹(Ω, ℬ, P) such that, for all B ∈ ℬ, the following equality is valid:

μ(B) = ∫_B Y dP, and hence ∫_B X dP = ∫_B Y dP.

This proves Theorem 1.3. □


If ℬ is generated by a countable or finite partition {B_j : j ∈ ℕ}, then it is fairly easy to give an explicit formula for the conditional expectation of a stochastic variable X ∈ L¹(Ω, 𝒜, P) with respect to ℬ:

E[X | ℬ] = Σ_{j∈ℕ} ( ∫_{B_j} X dP / P(B_j) ) 1_{B_j},

where the terms with P(B_j) = 0 are omitted.
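A numerical sketch of this formula (an invented finite sample space with non-uniform weights; `cond_exp` is an ad-hoc helper) computes E[X | ℬ] block by block and checks the defining property on every set of the σ-field generated by the partition, i.e. on every union of blocks:

```python
from fractions import Fraction
from itertools import combinations

# Invented sample space Omega = {0,...,5} with non-uniform weights.
P = {w: p for w, p in enumerate([Fraction(1, 12), Fraction(2, 12), Fraction(3, 12),
                                 Fraction(2, 12), Fraction(3, 12), Fraction(1, 12)])}
X = {0: 4, 1: 1, 2: 0, 3: 2, 4: 5, 5: 3}      # an integrable stochastic variable

blocks = [{0, 1}, {2, 3}, {4, 5}]             # partition generating B

# E[X | B](w) = sum_j (int_{B_j} X dP / P(B_j)) 1_{B_j}(w)
def cond_exp(w):
    Bj = next(B for B in blocks if w in B)
    return sum(X[v] * P[v] for v in Bj) / sum(P[v] for v in Bj)

# The defining property must hold for EVERY B in the field generated by the
# partition, i.e. for every union of blocks (including the empty union).
for r in range(len(blocks) + 1):
    for choice in combinations(blocks, r):
        B = set().union(*choice) if choice else set()
        lhs = sum(cond_exp(w) * P[w] for w in B)
        rhs = sum(X[w] * P[w] for w in B)
        assert lhs == rhs
```

Exact rational arithmetic via `Fraction` makes the integral identities hold with `==` rather than up to floating-point error.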


Next let ℬ be an arbitrary subfield of 𝒜, let X belong to L¹(Ω, 𝒜, P), and let B be an atom in ℬ. The latter means that P(B) > 0, and if A ∈ ℬ is such that A ⊂ B, then either P(A) = 0 or P(B∖A) = 0. If Y represents E[X | ℬ], then Y 1_B = b 1_B, P-almost surely, for some constant b. This follows from the ℬ-measurability of the variable Y together with the fact that B is an atom for (Ω, ℬ, P). So we get ∫_B X dP = ∫_B E[X | ℬ] dP = ∫ Y 1_B dP = b P(B), and hence

b = ∫_B X dP / P(B).

Consequently, on the atom B we have

E[X | ℬ] = ∫_B X dP / P(B) = b, P-almost surely.

In particular, for X = 1_A we have on the atom B the equality

P[A | ℬ] = P(A ∩ B) / P(B), P-almost surely.

If B is not an atom, then the conditional expectation on B need not be constant.


In the following theorem we collect some properties of conditional expectation. 
For the notion of uniform integrability see Section 5. 

1.4. Theorem. Let (Ω, 𝒜, P) be a probability space, and let ℬ be a subfield of 𝒜. Then the following assertions hold.

(1) If all events in ℬ have probability 0 or 1 (in particular if ℬ is the trivial field {∅, Ω}), then for all stochastic variables X ∈ L¹(Ω, 𝒜, P) the equality E[X | ℬ] = E(X) is true P-almost surely.

(2) If X is a stochastic variable in L¹(Ω, 𝒜, P) such that ℬ and σ(X) are independent, then the equality E[X | ℬ] = E(X) is true P-almost surely.

(3) If a and b are real or complex constants, and if the stochastic variables X and Y belong to L¹(Ω, 𝒜, P), then the equality E[aX + bY | ℬ] = aE[X | ℬ] + bE[Y | ℬ] is true P-almost surely.

(4) If X and Y are real stochastic variables in L¹(Ω, 𝒜, P) such that X ≤ Y, then the inequality E[X | ℬ] ≤ E[Y | ℬ] holds P-almost surely. Hence the mapping X ↦ E[X | ℬ] is a mapping from L¹(Ω, 𝒜, P) onto L¹(Ω, ℬ, P).

(5) (a) If (X_n : n ∈ ℕ) is a non-decreasing sequence of stochastic variables in L¹(Ω, 𝒜, P), then

  sup_n E[X_n | ℬ] = E[sup_n X_n | ℬ], P-almost surely.

(b) If (X_n : n ∈ ℕ) is any sequence of stochastic variables in L¹(Ω, 𝒜, P) which converges P-almost surely to a stochastic variable X, and if there exists a stochastic variable Y ∈ L¹(Ω, 𝒜, P) such that |X_n| ≤ Y for all n ∈ ℕ, then

  lim_{n→∞} E[X_n | ℬ] = E[lim_{n→∞} X_n | ℬ], P-almost surely, and in L¹(Ω, ℬ, P).

The condition "|X_n| ≤ Y for all n ∈ ℕ with Y ∈ L¹(Ω, 𝒜, P)" may be replaced with "the sequence (X_n)_{n∈ℕ} is uniformly integrable in the space L¹(Ω, 𝒜, P)" and still keep the second conclusion in (5b). In order to have P-almost sure convergence the uniform integrability condition should be replaced with the condition

  inf_{M>0} sup_{n∈ℕ} E[|X_n|, |X_n| > M | ℬ] = 0, P-almost surely. (1.2)

(6) If c(x) is a convex continuous function from ℝ to ℝ, and if X belongs to L¹(Ω, 𝒜, P), then

  c(E[X | ℬ]) ≤ E[c(X) | ℬ], P-almost surely.

(7) Let p ≥ 1, and let X be a stochastic variable in L^p(Ω, 𝒜, P). Then the stochastic variable E[X | ℬ] belongs to L^p(Ω, ℬ, P), and

  ‖E[X | ℬ]‖_p ≤ ‖X‖_p.

So the linear mapping X ↦ E[X | ℬ] is a projection from L^p(Ω, 𝒜, P) onto L^p(Ω, ℬ, P).

(8) (Tower property) Let ℬ′ be another subfield of 𝒜 such that ℬ ⊂ ℬ′ ⊂ 𝒜. If X belongs to L¹(Ω, 𝒜, P), then the equality E[E[X | ℬ′] | ℬ] = E[X | ℬ] holds P-almost surely.

(9) If X belongs to L¹(Ω, ℬ, P), then E[X | ℬ] = X, P-almost surely.

(10) If X belongs to L¹(Ω, 𝒜, P), and if Z belongs to L^∞(Ω, ℬ, P), then E[ZX | ℬ] = Z E[X | ℬ], P-almost surely.

(11) If X belongs to L²(Ω, 𝒜, P), then E[Y(X − E[X | ℬ])] = 0 for all Y ∈ L²(Ω, ℬ, P). Hence, the mapping X ↦ E[X | ℬ] is an orthogonal projection from L²(Ω, 𝒜, P) onto L²(Ω, ℬ, P).
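Several of these assertions can be verified numerically on a finite probability space, where conditioning on a partition-generated field is just block averaging. The sketch below (weights and values are invented; ℬ is generated by `coarse` and ℬ′ by the refinement `fine`) checks the tower property (8) and Jensen's inequality (6) for c(x) = x²:

```python
# Finite probability space with non-uniform weights (illustrative numbers).
omega = list(range(8))
P = [0.05, 0.10, 0.15, 0.20, 0.10, 0.10, 0.15, 0.15]
X = [1.0, 2.0, 0.5, 3.0, 1.5, 2.5, 0.5, 4.0]

coarse = [{0, 1, 2, 3}, {4, 5, 6, 7}]      # generates B
fine = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]    # generates B' with B subset of B'

def cond_exp(values, partition):
    """E[values | sigma(partition)]: the block average, constant on each block."""
    out = [0.0] * len(omega)
    for B in partition:
        pB = sum(P[w] for w in B)
        avg = sum(values[w] * P[w] for w in B) / pB
        for w in B:
            out[w] = avg
    return out

# Tower property (8): E[E[X | B'] | B] = E[X | B].
inner = cond_exp(X, fine)
lhs = cond_exp(inner, coarse)
rhs = cond_exp(X, coarse)
assert all(abs(a - b) < 1e-12 for a, b in zip(lhs, rhs))

# Jensen (6): c(E[X | B]) <= E[c(X) | B] for the convex function c(x) = x**2.
cX = [x * x for x in X]
assert all(e * e <= ce + 1e-12
           for e, ce in zip(cond_exp(X, coarse), cond_exp(cX, coarse)))
```

The tower property holds here exactly (up to floating-point rounding) because averaging block averages of a refinement over a coarser block reproduces the coarse block average.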




Observe that for ℬ the trivial σ-field, i.e. ℬ = {∅, Ω}, the condition in (1.2) is the same as saying that the sequence (X_n)_n is uniformly integrable in the sense that

inf_{M>0} sup_{n∈ℕ} E[|X_n|, |X_n| > M] = 0. (1.3)

Proof. We successively prove the items in Theorem 1.4.

(1) For every B ∈ ℬ we have to verify the equality

∫_B X dP = ∫_B E(X) dP.

If P(B) = 0, then both members are 0; if P(B) = 1, then both members are equal to E(X). This proves that the constant E(X) can be identified with the class E[X | ℬ].

(2) For every B ∈ ℬ we again have to verify the equality ∫_B X dP = ∫_B E(X) dP. Employing the independence of X and B ∈ ℬ this can be seen as follows:

∫_B X dP = ∫_Ω X 1_B dP = E[X 1_B] = E[X] E[1_B] = ∫_B E[X] dP. (1.4)






(3) This assertion is clear.

(4) This assertion is clear.

(5) (a) For all B ∈ ℬ and n ∈ ℕ we have ∫_B E[X_n | ℬ] dP = ∫_B X_n dP. By (4) we see that the sequence of conditional expectations E[X_n | ℬ], n ∈ ℕ, increases P-almost surely. The assertion in (5a) then follows from the monotone convergence theorem.

(b) Put X_n* = sup_{k≥n} X_k and X_n** = inf_{k≥n} X_k. Then we have −Y ≤ X_n** ≤ X_n ≤ X_n* ≤ Y, P-almost surely. Moreover, the sequences (Y − X_n*)_{n∈ℕ} and (Y + X_n**)_{n∈ℕ} are increasing sequences consisting of non-negative stochastic variables with Y − lim sup_{n→∞} X_n and Y + lim inf_{n→∞} X_n as their respective suprema. Since the sequence (X_n)_{n∈ℕ} converges P-almost surely to X, it follows by (5a) together with (4) that

E[X_n** | ℬ] ↑ E[X | ℬ] and E[X_n* | ℬ] ↓ E[X | ℬ].

From the pointwise inequalities X_n** ≤ X_n ≤ X_n* it then follows that lim_{n→∞} E[X_n | ℬ] = E[X | ℬ], P-almost surely. Next let the uniformly integrable sequence (X_n)_n in L¹(Ω, 𝒜, P) be pointwise convergent to X. Then lim_{n→∞} E[|X_n − X|] = 0. What we need is that

lim_{n→∞} E[|X_n − X| | ℬ] = 0. (1.5)

Under the extra hypothesis (1.2) this can be achieved as follows:

lim sup_{n→∞} E[|X_n − X| | ℬ]
  ≤ lim sup_{n→∞} E[|X_n − X|, |X_n − X| ≤ M | ℬ] + lim sup_{n→∞} E[|X_n − X|, |X_n − X| > M | ℬ]

(apply what already has been proved in (5b), with |X_n − X| instead of X_n, to the first term)

  ≤ lim sup_{n→∞} E[|X_n − X|, |X_n − X| > M | ℬ]. (1.6)

In (1.6) we let M tend to ∞, and employ (1.2) to conclude (1.5). This completes the proof of item (5).

(6) Write c(x) as a countable supremum of affine functions

c(x) = sup_{n∈ℕ} L_n(x), (1.7)

where L_n(z) = a_n z + b_n ≤ c(z), for all those z for which c(z) < ∞, i.e. for appropriate constants a_n and b_n. Every stochastic variable L_n(X) is integrable; by linearity (see (3)) we have L_n(E[X | ℬ]) = E[L_n(X) | ℬ]. Hence

L_n(E[X | ℬ]) ≤ E[c(X) | ℬ].

Consequently,

c(E[X | ℬ]) = sup_{n∈ℕ} L_n(E[X | ℬ]) ≤ E[c(X) | ℬ].

The fact that a convex function can be written in the form (1.7) can be found in most books on convex analysis; see e.g. Chapter 3 in [28].

(7) It suffices to apply item (6) to the function c(x) = |x|^p.

(8) This assertion is clear.

(9) This assertion is also obvious.

(10) This assertion is evident if Z is a finite linear combination of indicator functions of events taken from ℬ. The general case follows via a limiting procedure.

(11) This assertion is clear if Y is a finite linear combination of indicator functions of events taken from ℬ. The general case follows via a limiting procedure.

The proof of Theorem 1.4 is now complete. □




2. Lemma of Borel-Cantelli 

1.5. Definition. The limes superior or upper limit of a sequence (A_n)_{n∈ℕ} in a universe Ω is the set A of those elements ω ∈ Ω with the property that ω belongs to infinitely many A_n's. In a formula:

A = lim sup_{n→∞} A_n = ⋂_{n∈ℕ} ⋃_{k≥n} A_k.

The indicator function 1_A of the limes superior of the sequence (A_n)_{n∈ℕ} is equal to the lim sup of the sequence of its indicator functions: 1_A = lim sup_{n→∞} 1_{A_n}.

The limes inferior or lower limit of a sequence (A_n)_{n∈ℕ} in a universe Ω is the set A of those elements ω ∈ Ω with the property that, up to finitely many A_n's, the element (sample) ω belongs to all A_n's. In a formula:

A = lim inf_{n→∞} A_n = ⋃_{n∈ℕ} ⋂_{k≥n} A_k.

The indicator function 1_A of the limes inferior of the sequence (A_n)_{n∈ℕ} is equal to the lim inf of the sequence of its indicator functions: 1_A = lim inf_{n→∞} 1_{A_n}.




1.6. Lemma. Let (a_n)_{n∈ℕ} be a sequence of real numbers such that 0 ≤ a_n < 1. Then lim_{n→∞} Σ_{k=1}^n a_k < ∞ if and only if lim_{n→∞} Π_{k=1}^n (1 − a_k) > 0.

Proof. For 0 ≤ a < 1 the following elementary inequalities hold:

−a/(1 − a) ≤ log(1 − a) ≤ −a.

Hence we see

−Σ_{k=1}^n a_k/(1 − a_k) ≤ log( Π_{k=1}^n (1 − a_k) ) ≤ −Σ_{k=1}^n a_k.

The assertion in Lemma 1.6 easily follows from these inequalities. □
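Numerically the equivalence is easy to watch. For a_k = 1/(k+1)² the sum converges and the partial products Π_{k=1}^n (1 − a_k) telescope to (n+2)/(2(n+1)), staying bounded away from 0; for a_k = 1/(k+1) the sum diverges and the partial products equal 1/(n+1), tending to 0. A short sketch (the two sequences are illustrative choices):

```python
import math

def partial_sum(a, n):
    return sum(a(k) for k in range(1, n + 1))

def partial_prod(a, n):
    return math.prod(1 - a(k) for k in range(1, n + 1))

# a_k = 1/(k+1)^2: summable; the products telescope to (n+2)/(2(n+1)) -> 1/2.
a1 = lambda k: 1.0 / (k + 1) ** 2
# a_k = 1/(k+1): the sum diverges; the products equal 1/(n+1) -> 0.
a2 = lambda k: 1.0 / (k + 1)

for n in (10, 100, 1000):
    print(n, partial_sum(a1, n), partial_prod(a1, n),
             partial_sum(a2, n), partial_prod(a2, n))
```

The telescoping identities make the limiting behaviour visible already for moderate n.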

1.7. Lemma (Lemma of Borel-Cantelli). Let (A_n)_{n∈ℕ} be a sequence of events, and put A = lim sup_{n→∞} A_n = ⋂_{n∈ℕ} ⋃_{k≥n} A_k.

(i) If Σ_{n=1}^∞ P(A_n) < ∞, then P(A) = 0.

(ii) If the events A_n, n ∈ ℕ, are mutually P-independent, then the converse statement is true as well: P(A) < 1 implies Σ_{k=1}^∞ P(A_k) < ∞, and hence Σ_{k=1}^∞ P(A_k) = ∞ if and only if P(A) = 1.

Proof. (i) For P(A) we have the following estimate:

P(A) ≤ inf_{n∈ℕ} Σ_{k=n}^∞ P(A_k). (1.8)

Since Σ_{n=1}^∞ P(A_n) < ∞, we see that the right-hand side of (1.8) is 0.

(ii) The statement in assertion (ii) is trivial if for infinitely many numbers k the equality P(A_k) = 1 holds. So we may assume that for all k ∈ ℕ the probability P(A_k) is strictly less than 1. Apply Lemma 1.6 with a_k = P(A_k) to obtain that Σ_{k=1}^∞ P(A_k) < ∞ if and only if

0 < lim_{n→∞} Π_{k=1}^n (1 − P(A_k)) = lim_{n→∞} Π_{k=1}^n P(Ω∖A_k)

(the events (A_k)_{k∈ℕ} are independent)

= lim_{n→∞} P( ⋂_{k=1}^n (Ω∖A_k) ) = lim_{n→∞} P( Ω∖⋃_{k=1}^n A_k ) = 1 − P( ⋃_{k=1}^∞ A_k ). (1.9)

The same argument applied to the tail sequence (A_k)_{k≥n} shows that Σ_{k=1}^∞ P(A_k) < ∞ as soon as P(⋃_{k≥n} A_k) < 1 for some n ∈ ℕ. Since P(A) = lim_{n→∞} P(⋃_{k≥n} A_k), the inequality P(A) < 1 supplies such an n. This proves assertion (ii) of Lemma 1.7. □
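The dichotomy in the lemma can be illustrated by simulation (a sketch only: a simulation sees a finite horizon, and the seed and horizon below are arbitrary choices). With P(A_n) = 1/n² the summable case (i) predicts that only finitely many events occur, while with P(A_n) = 1/n independence and part (ii) predict infinitely many:

```python
import random

def count_occurrences(p, n_max, rng):
    """Count how many of the independent events A_n, with P(A_n) = p(n), occur."""
    return sum(rng.random() < p(n) for n in range(1, n_max + 1))

rng = random.Random(1)                      # fixed seed, arbitrary choice
# sum 1/n^2 < infinity: by part (i), almost surely only finitely many A_n occur.
few = count_occurrences(lambda n: 1.0 / n ** 2, 100_000, rng)
# sum 1/n = infinity: by part (ii) -- independence is essential -- a.s. infinitely many.
many = count_occurrences(lambda n: 1.0 / n, 100_000, rng)
print("summable case:", few, "  non-summable case:", many)
```

Typically the first count stays near Σ 1/n² ≈ 1.64 while the second grows like the logarithm of the horizon; only the limit statement of the lemma is rigorous, of course.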


3. Stochastic processes and projective systems of measures

1.8. Definition. Consider a probability space (Ω, 𝒜, P) and an index set I. Suppose that for every t ∈ I a measurable space (E_t, ℰ_t) and an 𝒜-ℰ_t-measurable mapping X(t) : Ω → E_t are given. Such a family {X(t) : t ∈ I} is called a stochastic process.




1.9. Remark. The space Ω is often called the sample path space, and the space E_t is often called the state space of the state variable X(t). The σ-field 𝒜 is often replaced with (some completion of) the σ-field generated by the state variables X(t), t ∈ I. This σ-field is written as ℱ. Let (S, 𝒮) be some measurable space. An ℱ-𝒮-measurable mapping Y : Ω → S is called an S-valued stochastic variable. Very often the state spaces are the same, i.e. (E_t, ℰ_t) = (E, ℰ), for all state variables X(t), t ∈ I.

In applications the index set I is often interpreted as the time set. So I can be a finite index set, e.g. I = {0, 1, ..., n}, or an infinite discrete time set, like I = ℕ = {0, 1, ...} or I = ℤ. The set I can also be a continuous time set: I = ℝ or I = ℝ₊ = [0, ∞). In the present text, most of the time we will consider I = [0, ∞). Let I be ℕ, ℤ, ℝ, or [0, ∞). In the so-called time-homogeneous or stationary case we also consider mappings ϑ_s : Ω → Ω, s ∈ I, s ≥ 0, such that X(t) ∘ ϑ_s = X(t + s), P-almost surely. It follows that these translation mappings ϑ_s : Ω → Ω, s ∈ I, are ℱ_t-ℱ_{t−s}-measurable, for all t ≥ s. If Y is a stochastic variable, then Y ∘ ϑ_s is measurable with respect to the σ-field σ(X(t) : t ≥ s). The concept of time-homogeneity of the process (X(t) : t ∈ I) can be explained as follows. Let Y : Ω → ℝ be a stochastic variable; e.g. Y = Π_{j=1}^n f_j(X(t_j)), where f_j : E → ℝ, 1 ≤ j ≤ n, are bounded measurable functions. Define the transition probability P(s, B) as follows: P(s, B) = P(X(s) ∈ B), s ∈ I, B ∈ ℰ. The measure B ↦ E[Y ∘ ϑ_s, X(s) ∈ B] is absolutely continuous with respect to the measure B ↦ P(s, B), B ∈ ℰ. It follows that there exists a function F(s, x), called the Radon-Nikodym derivative of the measure B ↦ E[Y ∘ ϑ_s, X(s) ∈ B] with respect to B ↦ P(s, B), such that E[Y ∘ ϑ_s, X(s) ∈ B] = ∫_B F(s, x) P(s, dx). The function F(s, x) is usually written as

F(s, x) = E[Y ∘ ϑ_s | X(s) ∈ dx] = E[Y ∘ ϑ_s, X(s) ∈ dx] / P[X(s) ∈ dx].

1.10. Definition. The process (X(t) : t ∈ I) is called time-homogeneous or stationary in time, provided that for all bounded stochastic variables Y : Ω → ℝ the function

E[Y ∘ ϑ_s | X(s) ∈ dx] is independent of s ∈ I, s ≥ 0.

In practice we only have to verify the property in Definition 1.10 for Y of the form Y = Π_{j=1}^n f_j(X(t_j)), where f_j : E_{t_j} → ℝ, 1 ≤ j ≤ n, are bounded measurable functions. Then Y ∘ ϑ_s = Π_{j=1}^n f_j(X(t_j + s)). This statement is a consequence of the monotone class theorem.

3.1. Finite dimensional distributions. As above, let (Ω, 𝒜, P) be a probability space and let {X(t) : t ∈ I} be a stochastic process where each state variable X(t) has state space (E_t, ℰ_t). For every non-empty subset J of I we write E^J = Π_{t∈J} E_t and ℰ^J = ⊗_{t∈J} ℰ_t; here ⊗ denotes the product σ-field. We also write X_J = ⊗_{t∈J} X(t), so that, if J = {t_1, ..., t_n}, then X_J = (X(t_1), ..., X(t_n)). The mapping X_J is the product mapping from Ω to E^J. The mapping X_J : Ω → E^J is 𝒜-ℰ^J-measurable. We can use it to define the image measure P_J:

P_J(B) = X_J(P)(B) = P[X_J^{-1}(B)] = P[ω ∈ Ω : X_J(ω) ∈ B] = P[X_J ∈ B],

where B ∈ ℰ^J. Between the different probability spaces (E^J, ℰ^J, P_J) there exist relatively simple relationships. Let J and H be non-empty subsets of I such that J ⊂ H, and consider the ℰ^H-ℰ^J-measurable projection mapping p_J^H : E^H → E^J, which "forgets" the "coordinates" in H∖J. If H = I, then we write p_J^I = p_J. For every pair J and H with J ⊂ H ⊂ I we have X_J = p_J^H ∘ X_H, and hence we get P_J(B) = p_J^H(P_H)(B) = P_H[p_J^H ∈ B], where B belongs to ℰ^J. In particular, if H = I, then P_J(B) = p_J(P_I)(B) = P_I[p_J ∈ B], where B belongs to ℰ^J. If J = {t_1, ..., t_n} is a finite set, then we have

P_J[B_1 × ⋯ × B_n] = P[X_J^{-1}(B_1 × ⋯ × B_n)] = P[X(t_1) ∈ B_1, ..., X(t_n) ∈ B_n],

with B_j ∈ ℰ_{t_j} for 1 ≤ j ≤ n.
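To make the measures P_J concrete, the sketch below enumerates all 2³ paths of a toy ±1 random walk (an invented example; the helper `P_J` here takes a two-point J = {t₁, t₂}) and verifies that marginalizing out one time point reproduces the smaller distribution — the consistency property formalized in Definition 1.13 below.

```python
from itertools import product
from fractions import Fraction

# Toy process: a simple random walk on Z over times I = {0,1,2,3}.
# Omega = all coin sequences; each path has probability 1/8.
steps = 3
paths = list(product([-1, 1], repeat=steps))

def X(t, path):
    return sum(path[:t])        # position at time t, starting at 0

# P_J(B1 x B2) = P[X(t1) in B1, X(t2) in B2] for J = {t1, t2}.
def P_J(t1, B1, t2, B2):
    hits = sum(1 for p in paths if X(t1, p) in B1 and X(t2, p) in B2)
    return Fraction(hits, len(paths))

# Projectivity: marginalizing out t2 (B2 = whole state space) recovers
# the one-dimensional distribution at t1.
full_range = set(range(-steps, steps + 1))
assert P_J(1, {1}, 2, full_range) == P_J(1, {1}, 1, {1})
```

Exact enumeration over the finite path space keeps the check free of sampling error.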

1.11. Remark. If the process {X(t) : t ∈ I} is interpreted as the movement of a particle, which at time t happens to be in the state space E_t, and if J = {t_1, ..., t_n} is a finite subset of I, then the probability measure P_J has the following interpretation:

For every collection of sets B_1 ∈ ℰ_{t_1}, ..., B_n ∈ ℰ_{t_n} the number P_J[B_1 × ⋯ × B_n] is the probability that at time t_1 the particle is in B_1, at time t_2 it is in B_2, ..., and at time t_n it is in B_n.





1.12. Definition. Let 𝔉(I) be the collection of all finite subsets of I. Then the
family {(E^J, ℰ^J, P_J) : J ∈ 𝔉(I)} is called the family of finite-dimensional
distributions of the process {X(t) : t ∈ I}; the one-dimensional distributions

{(E_t, ℰ_t, P_{{t}}) : t ∈ I}

are often called the marginals of the process.

The family of finite-dimensional distributions is a projective or consistent family
in the sense explained in the following definition.

1.13. Definition. A family of probability spaces {(E^J, ℰ^J, P_J) : J ∈ 𝔉(I)} is
called a projective system, a consistent system, or a cylindrical measure provided that

P_J(B) = p_J^H(P_H)(B) = P_H[p_J^H ∈ B]

for all finite subsets J ⊂ H, J, H ∈ 𝔉(I), and for all sets B ∈ ℰ^J.

1.14. Theorem (Theorem of Kolmogorov). Let {(E^J, ℰ^J, P_J) : J ∈ 𝔉(I)} be a
projective system of probability spaces. Suppose that every space E_t is a σ-compact
metrizable Hausdorff space. Then there exists a unique probability space
(E^I, ℰ^I, P_I) with the property that for all finite subsets J ∈ 𝔉(I) the equality
P_J(B) = P_I[p_J ∈ B] holds for all B ∈ ℰ^J.

Theorem 5.81 is the same as Theorem 1.14, but formulated for Polish and
Souslin spaces; its proof can be found in Chapter 5. Theorem 1.14 is the same
as Theorem 3.1. The reason that the conclusion in Theorem 1.14 holds for σ-compact
metrizable topological Hausdorff spaces is the fact that a finite Borel
measure μ on a metrizable σ-compact space E is regular in the sense that

μ(B) = sup_{K ⊂ B, K compact} μ(K) = inf_{U ⊃ B, U open} μ(U),   B any Borel subset of E.   (1.10)

1.15. Lemma. Let E be a σ-compact metrizable Hausdorff space. Then the
equality in (1.10) holds for all Borel subsets B of E.


Proof. The equalities in (1.10) can be deduced by proving that the collection 𝒟
defined by

𝒟 = {B ∈ ℬ_E : sup_{K ⊂ B, K compact} μ(K) = inf_{U ⊃ B, U open} μ(U)}
  = {B ∈ ℬ_E : sup_{F ⊂ B, F closed} μ(F) = inf_{U ⊃ B, U open} μ(U)}   (1.11)

contains the open subsets of E, is closed under taking complements, and is
closed under taking mutually disjoint countable unions. The second equality
holds because every closed subset of E is a countable union of compact subsets.
In (1.11) the sets K are taken from the compact subsets, the sets U from the
open subsets, and the sets F from the closed subsets of E. It is clear that 𝒟 is
closed under taking complements. Let (x, y) ↦ d(x, y) be a metric on E which


is compatible with its topology. Let F be a closed subset of E, and define U_n by

U_n = {x ∈ E : inf_{y∈F} d(x, y) < 1/n}.

Then the subset U_n is open, U_{n+1} ⊂ U_n, and F = ⋂_n U_n. It follows that
μ(F) = inf_n μ(U_n), and consequently F belongs to 𝒟. In other words the
collection 𝒟 contains the closed, and hence the open, subsets of E. Next let
(B_n)_n be a sequence of subsets in 𝒟. Fix ε > 0, and choose closed subsets
F_n ⊂ B_n and open subsets U_n ⊃ B_n such that

μ(B_n \ F_n) ≤ ε 2^{−n−1} and μ(U_n \ B_n) ≤ ε 2^{−n−1}.   (1.12)

From (1.12) it follows that

μ((⋃_{n=1}^∞ U_n) \ (⋃_{n=1}^∞ B_n)) ≤ Σ_{n=1}^∞ μ(U_n \ B_n) ≤ Σ_{n=1}^∞ ε 2^{−n−1} ≤ ε.   (1.13)

From (1.13) it follows that

μ(⋃_{n=1}^∞ B_n) = inf {μ(U) : U ⊃ ⋃_{n=1}^∞ B_n, U open}.   (1.14)

The same argumentation shows that

μ((⋃_{n=1}^∞ B_n) \ (⋃_{n=1}^∞ F_n)) ≤ Σ_{n=1}^∞ μ(B_n \ F_n) ≤ Σ_{n=1}^∞ ε 2^{−n−1} ≤ ε.   (1.15)

From (1.15) it follows that

μ((⋃_{n=1}^∞ B_n) \ (⋃_{n=1}^N F_n)) ≤ 2ε   (1.16)

for N = N_ε large enough. From (1.16) it follows that

μ(⋃_{n=1}^∞ B_n) = sup {μ(F) : F ⊂ ⋃_{n=1}^∞ B_n, F closed}.   (1.17)

From (1.14) and (1.17) it follows that ⋃_{n=1}^∞ B_n belongs to 𝒟. As already
mentioned, since every closed subset is the countable union of compact subsets,
the supremum over closed subsets in (1.17) may be replaced with a supremum over
compact subsets. Altogether, this completes the proof of Lemma 1.15. □


It is a nice observation that a locally compact Hausdorff space is metrizable and
σ-compact if and only if it is a Polish space. This is part of Theorem 5.3 (page
29) in Kechris [68]. This theorem reads as follows.

1.16. Theorem. Let E be a locally compact Hausdorff space. The following 
assertions are equivalent: 



(1) The space E is second countable, i.e. E has a countable basis for its
topology.

(2) The space E is metrizable and σ-compact.

(3) The space E has a metrizable one-point compactification (or Alexandroff
compactification).

(4) The space E is Polish, i.e. E is completely metrizable and separable.

(5) The space E is homeomorphic to an open subset of a compact metrizable
space.


A second-countable locally compact Hausdorff space is Polish: let (U_i)_i be a
countable basis of open subsets with compact closures (K_i)_i, and let V_i be an
open subset with compact closure and containing K_i. Using Urysohn's Lemma,
let 0 ≤ f_i ≤ 1 be continuous functions identically 0 off V_i, identically 1 on K_i,
and put

d(x, y) = Σ_{i=1}^∞ 2^{−i} |f_i(x) − f_i(y)| + |1 / (Σ_{i=1}^∞ 2^{−i} f_i(x)) − 1 / (Σ_{i=1}^∞ 2^{−i} f_i(y))|,   x, y ∈ E.   (1.18)

The triangle inequality for the usual absolute value shows that this is a metric.
This metric gives the same topology, and it is straightforward to verify its
completeness. For this argument see Garrett [57].




4. A definition of Brownian motion 


In this section we give a (preliminary) definition of Brownian motion. 


4.1. Gaussian measures on ℝ^d. For every t > 0 we define the Gaussian
kernel on ℝ^d as the function

p_d(t, x, y) = (2πt)^{−d/2} exp(−|x − y|² / (2t)).

Then we have ∫ p_d(t, x, z) dz = 1, and

p_d(s, x, z) p_d(t, z, y) = p_d(s + t, x, y) p_d(st / (s + t), (tx + sy) / (s + t), z).

Hence the function p_d(t, x, y) satisfies the equation of Chapman-Kolmogorov:

∫ p_d(s, x, z) p_d(t, z, y) dz = p_d(s + t, x, y).

This property will enable us to consider d-dimensional Brownian motion as a
Markov process. Next we calculate the finite-dimensional distributions of
Brownian motion.
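The Chapman-Kolmogorov identity is easy to check numerically. The following
sketch (not part of the original text; d = 1, with arbitrarily chosen s, t, x, y)
compares the convolution integral with the kernel at time s + t:

```python
import numpy as np

def p(t, x, y):
    # the one-dimensional Gaussian kernel p_1(t, x, y)
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

s, t, x, y = 0.5, 1.5, -0.3, 1.1
z = np.linspace(-30.0, 30.0, 200_001)
dz = z[1] - z[0]
lhs = float(np.sum(p(s, x, z) * p(t, z, y)) * dz)  # approximates the z-integral
rhs = float(p(s + t, x, y))
print(lhs, rhs)
```

The plain Riemann sum over a wide grid is accurate here because the integrand
is smooth and decays rapidly at both ends.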


4.2. Finite-dimensional distributions of Brownian motion. Let 0 <
t₁ < ⋯ < t_n < ∞ be a sequence of time instances in (0, ∞), and fix x₀ ∈ ℝ^d.
Define the probability measure P_{x₀; t₁,...,t_n} on the Borel field of
ℝ^d × ⋯ × ℝ^d (n times) by (t₀ = 0)

P_{x₀; t₁,...,t_n}[B₁ × ⋯ × B_n] = ∫_{B₁} ⋯ ∫_{B_n} dx_n ⋯ dx₁ ∏_{j=1}^n p_d(t_j − t_{j−1}, x_{j−1}, x_j),   (1.19)

where B₁, ..., B_n are Borel subsets of ℝ^d. Then, with B_k = ℝ^d, we have

P_{x₀; t₁,...,t_{k−1},t_k,t_{k+1},...,t_n}[B₁ × ⋯ × B_{k−1} × ℝ^d × B_{k+1} × ⋯ × B_n]

= ∫_{B₁} ⋯ ∫_{B_{k−1}} ∫_{ℝ^d} ∫_{B_{k+1}} ⋯ ∫_{B_n} dx_n ⋯ dx_{k+1} dx_k dx_{k−1} ⋯ dx₁
  ∏_{j=1}^{k−1} p_d(t_j − t_{j−1}, x_{j−1}, x_j)
  p_d(t_k − t_{k−1}, x_{k−1}, x_k) p_d(t_{k+1} − t_k, x_k, x_{k+1}) ∏_{j=k+2}^n p_d(t_j − t_{j−1}, x_{j−1}, x_j)

(Chapman-Kolmogorov)

= ∫_{B₁} ⋯ ∫_{B_{k−1}} ∫_{B_{k+1}} ⋯ ∫_{B_n} dx_n ⋯ dx_{k+1} dx_{k−1} ⋯ dx₁
  ∏_{j=1}^{k−1} p_d(t_j − t_{j−1}, x_{j−1}, x_j)
  p_d(t_{k+1} − t_{k−1}, x_{k−1}, x_{k+1}) ∏_{j=k+2}^n p_d(t_j − t_{j−1}, x_{j−1}, x_j)

= P_{x₀; t₁,...,t_{k−1},t_{k+1},...,t_n}[B₁ × ⋯ × B_{k−1} × B_{k+1} × ⋯ × B_n].   (1.20)

It follows that the family

{(ℝ^d × ⋯ × ℝ^d, ℬ^d ⊗ ⋯ ⊗ ℬ^d, P_{x₀; t₁,...,t_n}) : 0 < t₁ < ⋯ < t_n < ∞, n ∈ ℕ}   (n factors each)

is a projective or consistent system. Such families are also called cylindrical
measures. The extension theorem of Kolmogorov implies that in the present
situation a cylindrical measure can be considered as a genuine measure on the
product σ-field of Ω := (ℝ^d)^{[0,∞)}. This is the measure corresponding to Brownian
motion starting at x₀. More precisely, the theorem of Kolmogorov says that
there exists a probability space (Ω, ℱ, P_{x₀}) and state variables X(t) : Ω → ℝ^d,
t ≥ 0, such that

P_{x₀}[X(t₁) ∈ B₁, ..., X(t_n) ∈ B_n] = P_{x₀; t₁,...,t_n}[B₁ × ⋯ × B_n],

where the subsets B_j, 1 ≤ j ≤ n, belong to ℬ^d. It is assumed that

P_{x₀}[X(0) = x₀] = 1.
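Concretely, a Brownian path restricted to a finite grid can be sampled from the
finite-dimensional distributions (1.19) by drawing independent Gaussian increments.
A short simulation sketch (not from the text; one dimension, x₀ = 0, arbitrary
parameters) checking the standard covariance identity E[X(s)X(t)] = min(s, t),
which follows from (1.19):

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, dt = 50_000, 200, 0.01
# Brownian paths from x0 = 0 via independent Gaussian increments N(0, dt)
incr = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
X = np.cumsum(incr, axis=1)                      # X[:, k] represents X((k + 1) dt)
s_idx, t_idx = 49, 149                           # the times s = 0.5 and t = 1.5
cov = float(np.mean(X[:, s_idx] * X[:, t_idx]))  # should be min(s, t) = 0.5
var = float(np.mean(X[:, t_idx] ** 2))           # should be t = 1.5
print(cov, var)
```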


5. Martingales and related processes 

Let (Ω, ℱ, P) be a probability space, and let {ℱ_t : t ∈ I} be a family of subfields
of ℱ, indexed by a totally ordered index set (I, ≤). Suppose that the family
{ℱ_t : t ∈ I} is increasing in the sense that s ≤ t implies ℱ_s ⊂ ℱ_t. Such a family
of σ-fields is called a filtration. A stochastic process {X(t) : t ∈ I}, where X(t),
t ∈ I, are mappings from Ω to E_t, is called adapted, or more precisely, adapted
to the filtration {ℱ_t : t ∈ I} if every X(t) is ℱ_t-ℰ_t-measurable. For the σ-field
ℱ_t we often take (some completion of) the σ-field generated by X(s), s ≤ t:
ℱ_t = σ{X(s) : s ≤ t}.

1.17. Definition. An adapted process {X(t) : t ∈ I} with state space (ℝ, ℬ)
is called a super-martingale if every variable X(t) is P-integrable, and if s ≤ t,
s, t ∈ I, implies E[X(t) | ℱ_s] ≤ X(s), P-almost surely. An adapted process
{X(t) : t ∈ I} with state space (ℝ, ℬ) is called a sub-martingale if every variable
X(t) is P-integrable, and if s ≤ t, s, t ∈ I, implies E[X(t) | ℱ_s] ≥ X(s), P-almost
surely. If an adapted process is at the same time a super- and a sub-martingale,
then it is called a martingale.

The martingale in the following example is called a closed martingale. 

1.18. Example. Let X_∞ belong to L¹(Ω, ℱ, P), and let {ℱ_t : t ∈ [0, ∞)} be a
filtration in ℱ. Put X(t) = E[X_∞ | ℱ_t], t ≥ 0. Then the process {X(t) : t ≥ 0}
is a martingale with respect to the filtration {ℱ_t : t ∈ [0, ∞)}.

The following theorem shows that uniformly integrable martingales are closed 
martingales. 



1.19. Theorem (Doob's theorem). Any uniformly integrable martingale

{X(t) : t ≥ 0} in L¹(Ω, ℱ, P)

converges P-almost surely and in mean (i.e. in L¹(Ω, ℱ, P)) to a stochastic
variable X_∞ such that for every t ≥ 0 the equality X(t) = E[X_∞ | ℱ_t] holds
P-almost surely.

Let F be a subset of L¹(Ω, ℱ, P). Then F is uniformly integrable if for every
ε > 0 there exists a function g ∈ L¹(Ω, ℱ, P) such that ∫_{{|f| > |g|}} |f| dP ≤ ε
for all f ∈ F. Since P is a finite positive measure we may assume that g is a
(large) positive constant.

1.20. Theorem. Sub-martingales constitute a convex cone:

(i) A positive linear combination of sub-martingales is again a sub-martingale;
the space of sub-martingales forms a convex cone.

(ii) An increasing convex function of a sub-martingale is again a sub-martingale
(and a convex function of a martingale is a sub-martingale).

Not all martingales are closed, as is shown in the following example.

1.21. Example. Fix t > 0, and x, y ∈ ℝ^d. Let

{(Ω, ℱ, P_x), (X(t), t ≥ 0), (ϑ_t : t ≥ 0), (ℝ^d, ℬ^d)}

be Brownian motion starting at x ∈ ℝ^d, and put, as above,

p_d(t, x, y) = (2πt)^{−d/2} exp(−|x − y|² / (2t)).

The process s ↦ p_d(t − s, X(s), y) is a P_x-martingale on the half-open interval
[0, t).
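A Monte Carlo sanity check of Example 1.21 (not part of the text; d = 1, with
arbitrarily chosen numbers): the martingale property implies that
E_x[p₁(t − s, X(s), y)] = p₁(t, x, y) for 0 ≤ s < t, which is the
Chapman-Kolmogorov identity of Section 4.1 in disguise.

```python
import numpy as np

rng = np.random.default_rng(1)

def p1(t, x, y):
    # the Gaussian kernel of Example 1.21, with d = 1
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t, x, y, s = 1.0, 0.2, 1.0, 0.6          # fixed t and y; the martingale runs in s < t
X_s = x + np.sqrt(s) * rng.normal(size=1_000_000)  # samples of X(s) under P_x
lhs = float(np.mean(p1(t - s, X_s, y)))  # E_x[p1(t - s, X(s), y)]
rhs = float(p1(t, x, y))                 # the value of the martingale at s = 0
print(lhs, rhs)
```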

5.1. Stopping times. A stochastic variable T : Ω → [0, ∞] is called a
stopping time with respect to the filtration {ℱ_t : t ≥ 0} if for every t ≥ 0 the
event {T ≤ t} belongs to ℱ_t. If T is a stopping time, the process t ↦ 1_{{T ≤ t}} is
adapted to {ℱ_t : t ≥ 0}. The meaning of a stopping time is the following one. The
moment T is the time at which some phenomenon happens. If at every given time t
the information contained in ℱ_t suffices to conclude whether or not this phenomenon
occurred before time t, then T is a stopping time. Let

{(Ω, ℱ, P_x), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (ℝ^d, ℬ^d)}

be Brownian motion starting at x ∈ ℝ^d, let p : ℝ^d → (0, ∞) be a strictly positive
continuous function, and O an open subset of ℝ^d. The first exit time from O,
or the first hitting time of the complement of O, defined by

T = inf {t ≥ 0 : X(t) ∈ ℝ^d \ O},

is a (very) relevant stopping time. The time T is a so-called terminal stopping
time: on the event {T > s} it satisfies s + T ∘ ϑ_s = T. Other relevant stopping
times are:

τ_ℓ = inf {t ≥ 0 : ∫₀ᵗ p(X(s)) ds ≥ ℓ},   ℓ ≥ 0.




Such stopping times are used for (stochastic) time change:

τ_ℓ + τ_η ∘ ϑ_{τ_ℓ} = τ_{ℓ+η},   ℓ, η ≥ 0.

Note that the mapping ℓ ↦ τ_ℓ is the inverse of the mapping t ↦ ∫₀ᵗ p(X(s)) ds.
Also note the equality: {τ_ℓ ≤ t} = {∫₀ᵗ p(X(s)) ds ≥ ℓ}, ℓ ≥ 0. The mapping
ℓ ↦ τ_ℓ is strictly increasing from the interval [0, ∫₀^∞ p(X(s)) ds) onto [0, ∞).
Arbitrary stopping times T are often approximated by "discrete" stopping times:
T = lim_{n→∞} T_n, where T_n = 2^{−n} ⌈2^n T⌉. Notice that T ≤ T_{n+1} ≤
T_n ≤ T + 2^{−n}, and that {T_n = k 2^{−n}} = {(k − 1) 2^{−n} < T ≤ k 2^{−n}}, k ∈ ℕ.
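First exit times are easy to experiment with numerically. The sketch below
(not from the text) simulates the exit of one-dimensional Brownian motion from
O = (0, 1) on a grid of step dt and compares the mean exit time with the
classical value E_x[T] = x(1 − x); the step size and path count are arbitrary
choices, and the Euler discretization introduces a small positive bias.

```python
import numpy as np

rng = np.random.default_rng(2)
x0, dt, n_paths = 0.3, 2e-4, 10_000
x = np.full(n_paths, x0)                   # current positions
t = np.zeros(n_paths)                      # elapsed times
alive = np.ones(n_paths, dtype=bool)       # paths still inside O = (0, 1)
while alive.any():
    n = int(alive.sum())
    x[alive] += np.sqrt(dt) * rng.normal(size=n)   # Euler step for the surviving paths
    t[alive] += dt
    alive &= (x > 0.0) & (x < 1.0)
print(t.mean())   # close to x0 * (1 - x0) = 0.21, up to discretization bias
```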

1.22. Theorem. Let (Ω, ℱ, P) be a probability space, and let {ℱ_t : t ≥ 0} be a
filtration in ℱ. The following assertions hold true:

(1) constant times are stopping times: for every t ≥ 0 fixed the time T = t
is a stopping time;

(2) if S and T are stopping times, then so are min(S, T) and max(S, T);

(3) if T is a stopping time, then the collection ℱ_T defined by

ℱ_T = {A ∈ ℱ : A ∩ {T ≤ t} ∈ ℱ_t for all t ≥ 0}

is a subfield of ℱ;

(4) if S and T are stopping times, then S + T ∘ ϑ_S is a stopping time as well,
provided the paths of the process are P-almost surely right-continuous
and the same is true for the filtration {ℱ_t : t ≥ 0}.

The filtration {ℱ_t : t ≥ 0} is right-continuous if ℱ_t = ⋂_{s > t} ℱ_s, t ≥ 0. The
(sample) paths t ↦ X(t) are said to be P-almost surely right-continuous, provided
for all t ≥ 0 we have X(t) = lim_{s ↓ t} X(s), P-almost surely.



The following theorem shows that in many cases fixed times can be replaced 
with stopping times. In particular this is true if we study (right-continuous) 
sub-martingales, super-martingales or martingales. 

1.23. Theorem (Doob's optional sampling theorem). Let (X(t) : t ≥ 0) be a
uniformly integrable process in L¹(Ω, ℱ, P) which is a sub-martingale with respect
to the filtration (ℱ_t : t ≥ 0). Let S and T be stopping times such that
S ≤ T. Then E[X(T) | ℱ_S] ≥ X(S), P-almost surely.

Similar statements hold for super-martingales and martingales.

Notice that X(T) stands for the stochastic variable ω ↦ X(T(ω))(ω) = X(T(ω), ω).

We conclude this introduction with a statement of the decomposition theorem
of Doob-Meyer. A process {X(t) : t ≥ 0} is of class (DL) if for every t > 0 the
family

{X(τ) : 0 ≤ τ ≤ t, τ is an (ℱ_t)-stopping time}

is uniformly integrable. An ℱ_t-martingale {M(t) : t ≥ 0} is of class (DL), an
increasing adapted process {A(t) : t ≥ 0} in L¹(Ω, ℱ, P) is of class (DL), and
hence the sum {M(t) + A(t) : t ≥ 0} is of class (DL). If {X(t) : t ≥ 0} is a
sub-martingale and if ρ is a real number, then the process {max(X(t), ρ) : t ≥ 0}
is a sub-martingale of class (DL). Processes of class (DL) are important in the
Doob-Meyer decomposition theorem. Let (Ω, ℱ, P) be a probability space, let
{ℱ_t : t ≥ 0} be a right-continuous filtration in ℱ, and let {X(t) : t ≥ 0} be a
right-continuous sub-martingale of class (DL) which possesses almost sure left limits.
We mention the following version of the Doob-Meyer decomposition theorem.
See Remark 3.54 as well.

1.24. Theorem. Let {X(t) : t ≥ 0} be a sub-martingale of class (DL) which
has P-almost surely left limits, and which is right-continuous. Then there exists
a unique predictable right-continuous increasing process {A(t) : t ≥ 0} with
A(0) = 0 such that the process {X(t) − A(t) : t ≥ 0} is an ℱ_t-martingale.

A process (ω, t) ↦ X(t)(ω) = X(t, ω) is predictable if it is measurable with
respect to the σ-field generated by {A × (a, b] : A ∈ ℱ_a, a < b}. For more details
on càdlàg sub-martingales, see Theorem 3.77. The following proposition says
that a non-negative right-continuous sub-martingale is of class (DL).

1.25. Proposition. Let (Ω, ℱ, P) be a probability space, let (ℱ_t)_{t≥0} be a filtration
of σ-fields contained in ℱ. Suppose that t ↦ X(t) is a right-continuous sub-martingale
relative to the filtration (ℱ_t)_{t≥0} attaining its values in [0, ∞). Then
the family {X(t) : t ≥ 0} is of class (DL).

In fact it suffices to assume that there exists a real number m such that X(t) ≥
−m, P-almost surely. This follows from Proposition 1.25 by considering X(t) + m
instead of X(t).

If t ↦ M(t) is a continuous martingale in L²(Ω, ℱ, P), then t ↦ |M(t)|² is a
non-negative sub-martingale, and so it splits as the sum of a martingale t ↦
|M(t)|² − ⟨M, M⟩(t) and an increasing process t ↦ ⟨M, M⟩(t), the quadratic
variation process of M(t).
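For Brownian motion itself, M(t) = X(t) is a continuous L²-martingale with
⟨M, M⟩(t) = t, a standard fact which the following sketch (not part of the text;
parameters arbitrary) illustrates on [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps = 10_000, 500
dt = 1.0 / n_steps
dM = np.sqrt(dt) * rng.normal(size=(n_paths, n_steps))  # Brownian increments on [0, 1]
M = np.cumsum(dM, axis=1)
qv = (dM ** 2).sum(axis=1)      # pathwise sum of squared increments, close to <M, M>(1) = 1
resid = M[:, -1] ** 2 - qv      # |M(1)|^2 - <M, M>(1): mean zero, by the martingale property
print(float(qv.mean()), float(resid.mean()))
```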

Proof of Proposition 1.25. Fix t > 0, and let τ : Ω → [0, t] be a
stopping time. Let for m ∈ ℕ the stopping time τ_m : Ω → [0, ∞] be defined by
τ_m = inf {s ≥ 0 : X(s) ≥ m} if X(s) ≥ m for some s < ∞, otherwise τ_m = ∞.
Then the event {X(τ) ≥ m} is contained in the event {τ_m ≤ τ}. Hence,

E[X(τ) : X(τ) ≥ m] ≤ E[X(τ) : τ_m ≤ τ] ≤ E[X(t) : τ_m ≤ τ] ≤ E[X(t) : τ_m ≤ t].   (1.21)

Since, P-almost surely, τ_m ↑ ∞ for m → ∞, it follows that

lim_{m→∞} sup {E[X(τ) : X(τ) ≥ m] : τ ∈ [0, t], τ stopping time} = 0.

Consequently, the sub-martingale t ↦ X(t) is of class (DL). The proof of
Proposition 1.25 is complete now. □


It is perhaps useful to insert the following proposition. 

1.26. Proposition. Processes of the form M(t) + A(t), with M(t) a martingale
and with A(t) an increasing process in L¹(Ω, ℱ, P), are of class (DL).


Proof. Let {X(t) = M(t) + A(t) : t ≥ 0} be the decomposition of the
sub-martingale {X(t) : t ≥ 0} into a martingale {M(t) : t ≥ 0} and an increasing
process {A(t) : t ≥ 0} with A(0) = 0, and let 0 ≤ τ ≤ t be any stopping time.
Here t is some fixed time. For N ∈ ℕ we have

E(|X(τ)| : |X(τ)| ≥ N) ≤ E(|M(τ)| : |X(τ)| ≥ N) + E(A(τ) : |X(τ)| ≥ N)
≤ E(|M(t)| : |X(τ)| ≥ N) + E(A(t) : |X(τ)| ≥ N)
≤ E(|M(t)| + A(t) : |X(τ)| ≥ N)
≤ E(|M(t)| + A(t) : sup_{0≤s≤t} |X(s)| ≥ N).

Since, by Doob's maximality theorem 1.28,

N P(sup_{0≤s≤t} |X(s)| ≥ N) ≤ N P(sup_{0≤s≤t} |M(s)| ≥ N/2) + N P(sup_{0≤s≤t} A(s) ≥ N/2)
≤ 2 E(|M(t)| + A(t)),

it follows that

lim_{N→∞} sup {E(|X(τ)| : |X(τ)| ≥ N) : 0 ≤ τ ≤ t, τ stopping time} = 0.

This proves Proposition 1.26. □


First we formulate and prove Doob's maximal inequality for time-discrete
sub-martingales. In Theorem 1.27 the sequence i ↦ X_i is defined on a filtered
probability space (Ω, ℱ_i, P)_{i∈ℕ}, and in Theorem 1.28 the process t ↦ X(t) is
defined on a filtered probability space (Ω, ℱ_t, P)_{t≥0}.



1.27. Theorem (Doob's maximal inequality). Let (X_i)_{i∈ℕ} be a sub-martingale
w.r.t. a filtration (ℱ_i)_{i∈ℕ}. Let S_n = max_{1≤i≤n} X_i be the running maximum of
X_i. Then for any ℓ > 0,

P[S_n ≥ ℓ] ≤ (1/ℓ) E[X_n⁺ 1_{{S_n ≥ ℓ}}] ≤ (1/ℓ) E[X_n⁺],   (1.22)

where X_n⁺ = X_n ∨ 0. In particular, if X_i is a martingale and M_n = max_{1≤i≤n} |X_i|,
then

P[M_n ≥ ℓ] ≤ (1/ℓ) E[|X_n| 1_{{M_n ≥ ℓ}}] ≤ (1/ℓ) E[|X_n|].   (1.23)

Proof. Let τ_ℓ = inf {i ≥ 1 : X_i ≥ ℓ}. Then P[S_n ≥ ℓ] = Σ_{i=1}^n P[τ_ℓ = i].
For each 1 ≤ i ≤ n,

P[τ_ℓ = i] = E[1_{{X_i ≥ ℓ}} 1_{{τ_ℓ = i}}] ≤ (1/ℓ) E[X_i⁺ 1_{{τ_ℓ = i}}].   (1.24)

Note that {τ_ℓ = i} ∈ ℱ_i, and X_i⁺ is a sub-martingale because X_i itself is a
sub-martingale while φ(x) = x⁺ = x ∨ 0 = max(x, 0) is an increasing convex
function. Therefore

E[X_n⁺ 1_{{τ_ℓ = i}} | ℱ_i] = 1_{{τ_ℓ = i}} E[X_n⁺ | ℱ_i] ≥ 1_{{τ_ℓ = i}} (E[X_n | ℱ_i])⁺ ≥ 1_{{τ_ℓ = i}} X_i⁺,

and hence E[X_i⁺ 1_{{τ_ℓ = i}}] ≤ E[X_n⁺ 1_{{τ_ℓ = i}}]. Substituting this inequality into
(1.24) and then summing over 1 ≤ i ≤ n yields (1.22). The inequality in
(1.23) follows by applying (1.22) to the sub-martingale |X_i|. □
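The inequality (1.23) can be illustrated by simulation. A sketch (not from the
text) with a simple ±1 random walk, which is a martingale; the horizon n and the
level ℓ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
n_paths, n, ell = 100_000, 50, 10.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))   # symmetric steps: X_i is a martingale
X = np.cumsum(steps, axis=1)
M_n = np.abs(X).max(axis=1)                          # running maximum of |X_i|
lhs = float(np.mean(M_n >= ell))                     # P[M_n >= ell]
rhs = float(np.mean(np.abs(X[:, -1]))) / ell         # E|X_n| / ell
print(lhs, rhs)                                      # lhs stays below rhs, as (1.23) predicts
```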




Next we formulate and prove Doob’s maximal inequality for continuous time 
sub-martingales. 

1.28. Theorem (Doob's maximal inequality). Let (X(t))_{t≥0} be a sub-martingale
w.r.t. a filtration (ℱ_t)_{t≥0}. Let S(t) = sup_{0≤s≤t} X(s) be the running maximum
of X(t). Suppose that the process t ↦ X(t) is P-almost surely continuous from
the right (and possesses left limits P-almost surely). Then for any ℓ > 0,

P[S(t) ≥ ℓ] ≤ (1/ℓ) E[X⁺(t) 1_{{S(t) ≥ ℓ}}] ≤ (1/ℓ) E[X⁺(t)],   (1.25)

where X⁺(t) = X(t) ∨ 0 = max(X(t), 0). In particular, if t ↦ X(t) is a
martingale and M(t) = sup_{0≤s≤t} |X(s)|, then

P[M(t) ≥ ℓ] ≤ (1/ℓ) E[|X(t)| 1_{{M(t) ≥ ℓ}}] ≤ (1/ℓ) E[|X(t)|].   (1.26)

Proof. Let, for every N ∈ ℕ, τ_N be the (ℱ_t)_{t≥0}-stopping time defined
by τ_N = inf {t ≥ 0 : X⁺(t) ≥ N}. In addition define the double sequence of
processes X_{n,N}(t) by

X_{n,N}(t) = X(2^{−n} ⌈2^n t⌉ ∧ τ_N).

Theorem 1.28 follows from Theorem 1.27 by applying it to the processes t ↦
X_{n,N}(t), n ∈ ℕ, N ∈ ℕ. As a consequence of Theorem 1.27 we see that Theorem
1.28 is true for the double sequence t ↦ X_{n,N}(t), because, essentially speaking,
these processes are discrete-time processes with the property that the processes
(n, t) ↦ X⁺_{n,N}(t) attain P-almost surely their values in the interval [0, N].
Then we let n → ∞ to obtain Theorem 1.28 for the processes t ↦ X(t ∧ τ_N),
N ∈ ℕ. Finally we let N → ∞ to obtain the full result in Theorem 1.28. □

5.2. Additive processes. In this final section we introduce the notion
of additive and multiplicative processes. Let E be a second countable locally
compact Hausdorff space. In the non-time-homogeneous case we consider real-valued
processes which depend on two time parameters: (t₁, t₂) ↦ Z(t₁, t₂),
0 ≤ t₁ ≤ t₂ ≤ T. It is assumed that for all 0 ≤ t₁ ≤ t₂ ≤ T the variable
Z(t₁, t₂) only depends on, or is measurable with respect to, σ(X(s) : t₁ ≤ s ≤ t₂).
Such a process is called additive if

Z(t₁, t₂) = Z(t₁, t) + Z(t, t₂),   t₁ ≤ t ≤ t₂.

The process Z is called multiplicative if

Z(t₁, t₂) = Z(t₁, t) · Z(t, t₂),   t₁ ≤ t ≤ t₂.

Let p : [0, T] × E → ℝ be a continuous function, and let {X(t) : 0 ≤ t ≤ T} be
an E-valued process which has left limits in E, and which is right-continuous
(i.e. it is càdlàg). Put Z(t₁, t₂) = ∫_{t₁}^{t₂} p(s, X(s)) ds. Then the process
(t₁, t₂) ↦ Z(t₁, t₂), 0 ≤ t₁ ≤ t₂ ≤ T, is additive, and the process
(t₁, t₂) ↦ exp(Z(t₁, t₂)), 0 ≤ t₁ ≤ t₂ ≤ T, is multiplicative.
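The additivity of Z(t₁, t₂) = ∫_{t₁}^{t₂} p(s, X(s)) ds and the multiplicativity
of its exponential can be seen directly on a discretized path. A small sketch
(not from the text; the path and the function p below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(5)
dt, n_steps = 1e-3, 3000
# a right-continuous sample path X(s) on [0, 3]; a Brownian path, for concreteness
X = np.cumsum(np.sqrt(dt) * rng.normal(size=n_steps))
s_grid = dt * np.arange(1, n_steps + 1)
p = lambda s, x: np.cos(s) + x ** 2        # an arbitrary continuous function p(s, x)

def Z(i1, i2):
    # Riemann-sum approximation of Z(t1, t2) = integral of p(s, X(s)) over [t1, t2]
    return float(np.sum(p(s_grid[i1:i2], X[i1:i2])) * dt)

i1, i, i2 = 500, 1500, 3000                # grid indices of t1 <= t <= t2
add_lhs = Z(i1, i2)
add_rhs = Z(i1, i) + Z(i, i2)              # additivity: Z(t1, t2) = Z(t1, t) + Z(t, t2)
mult_lhs = np.exp(Z(i1, i2))
mult_rhs = np.exp(Z(i1, i)) * np.exp(Z(i, i2))   # exp(Z) is multiplicative
print(add_lhs, add_rhs, mult_lhs, mult_rhs)
```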



Next we consider the particular case that we deal with time-homogeneous processes
like Brownian motion:

{(Ω, ℱ, P_x), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (ℝ^d, ℬ^d)},

which represents Brownian motion starting at x ∈ ℝ^d. An adapted process
t ↦ Z(t) is called additive if Z(s + t) = Z(s) + Z(t) ∘ ϑ_s, P_x-almost surely,
for all s, t ≥ 0. It is called multiplicative provided Z(s + t) = Z(s) · Z(t) ∘ ϑ_s,
P_x-almost surely, for all s, t ≥ 0. Examples of additive processes are integrals
of the form Z(t) = ∫₀ᵗ p(X(s)) ds, where x ↦ p(x) is a continuous (or Borel)
function on ℝ^d, or stochastic integrals (Itô, Stratonovich integrals) of the form
Z(t) = ∫₀ᵗ p(X(s)) dX(s). Such integrals have to be interpreted in some L²-sense.
More details will be given in Section 6. If t ↦ Z(t) is an additive
process, then its exponential t ↦ exp(Z(t)) is a multiplicative process. If T is a
terminal stopping time, then the process t ↦ 1_{{T > t}} is a multiplicative process.


Let (X_n)_{n∈ℕ} be a sequence of non-negative i.i.d. random variables each of which
has density f₁ ≥ 0. Suppose that f_n is the density of the distribution of
Σ_{j=1}^n X_j. Note "i.i.d." means "independent, identically distributed". Then

P[Σ_{j=1}^n X_j ≤ t] = ∫₀ᵗ f_n(s) ds,

and hence

∫₀ᵗ f_{n+1}(s) ds = P[Σ_{j=1}^{n+1} X_j ≤ t] = P[Σ_{j=1}^n X_j + X_{n+1} ≤ t],

so that

f_{n+1}(t) = ∫₀ᵗ f_n(ρ) f₁(t − ρ) dρ.

It follows that

∫₀ᵗ f_n(s) ds − ∫₀ᵗ f_{n+1}(ρ) dρ = ∫₀ᵗ f_n(s) ds − ∫₀ᵗ ∫₀^ρ f_n(s) f₁(ρ − s) ds dρ

= ∫₀ᵗ f_n(s) ds − ∫₀ᵗ f_n(s) ∫_s^t f₁(ρ − s) dρ ds

= ∫₀ᵗ f_n(s) (1 − ∫_s^t f₁(ρ − s) dρ) ds

= ∫₀ᵗ f_n(s) ∫_t^∞ f₁(ρ − s) dρ ds

= ∫₀ᵗ f_n(s) ∫_{t−s}^∞ f₁(ρ) dρ ds.   (1.27)

If f₁(s) = λ e^{−λs}, then f_n(s) = (λⁿ s^{n−1} / (n − 1)!) e^{−λs}. This follows by induction.
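The convolution identity f_{n+1}(t) = ∫₀ᵗ f_n(ρ) f₁(t − ρ) dρ and the induction
claim for the exponential density can be checked on a grid. A sketch (not from
the text) with λ = 2, an arbitrary choice:

```python
import numpy as np
from math import factorial

# With f1(s) = lam * exp(-lam * s), the induction claim says the n-fold
# convolution is the Gamma density f_n(s) = lam^n s^(n-1) e^(-lam s) / (n-1)!.
lam, ds = 2.0, 1e-3
s = ds * np.arange(10_000)                 # grid on [0, 10)
f1 = lam * np.exp(-lam * s)
fn = f1.copy()
for n in range(2, 5):                      # build f_2, f_3, f_4 by discrete convolution
    fn = np.convolve(fn, f1)[: len(s)] * ds
    exact = lam ** n * s ** (n - 1) * np.exp(-lam * s) / factorial(n - 1)
    print(n, float(np.max(np.abs(fn - exact))))   # small discretization error
```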


5.3. Continuous time discrete processes. Here we suppose that the
process

{(Ω, ℱ, P), (X(t) : t ≥ 0), (S, 𝒮)}

is governed by time-homogeneous or stationary transition probabilities:

p_{j,i}(t) = P[X(t) = j | X(0) = i] = P[X(t + s) = j | X(s) = i],   i, j ∈ S,   (1.28)

for all s ≥ 0. Here S is a discrete state space, e.g. S = ℤ^d, S = ℕ, or
S = {0, 1, ..., N}. The measurable space (Ω, ℱ) is called the sample or sample
path space. Its elements ω ∈ Ω are called realizations. The mappings X(t) : Ω → S
are called the state variables; the application t ↦ X(t)(ω) is called a sample
path or realization. The translation operators ϑ_t, t ≥ 0, are mappings from Ω
to Ω with the property that X(s) ∘ ϑ_t = X(s + t), P-almost surely. For
the time being these operators will not be used; they are very convenient to
express the Markov property in the time-homogeneous case. We assume that
the Chapman-Kolmogorov conditions are satisfied:

p_{j,i}(s + t) = Σ_{k∈S} p_{j,k}(s) p_{k,i}(t),   i, j ∈ S, s, t ≥ 0.   (1.29)


In fact the Markov property is a consequence of the Chapman-Kolmogorov
identity (1.29). From the Chapman-Kolmogorov identity (1.29) the following
important identity follows:

P(s + t) = P(s) P(t),   s, t ≥ 0.   (1.30)

The identity in (1.30) is called the semigroup property; the identity has to be
interpreted as matrix multiplication. Suppose that the functions t ↦ p_{j,i}(t), j,
i ∈ S, are right differentiable at t = 0. The latter means that the following
limits exist:

q_{j,i} = lim_{Δ↓0} (p_{j,i}(Δ) − p_{j,i}(0)) / Δ,   i, j ∈ S.




We assume that p_{j,i}(0) = δ_{j,i}, where δ_{j,i} is the Kronecker delta: δ_{j,i} = 0 if
j ≠ i, and δ_{i,i} = 1. Put Q = (q_{j,i})_{j,i∈S}. Then the matrix Q is a Kolmogorov
matrix in the sense that q_{j,i} ≥ 0 for j ≠ i and Σ_{j∈S} q_{j,i} = 0. It follows that
q_{i,i} = −Σ_{j∈S, j≠i} q_{j,i} ≤ 0. The reason that the off-diagonal entries q_{j,i}, j ≠ i, are
non-negative is due to the fact that for j ≠ i we have

q_{j,i} = lim_{t↓0} (p_{j,i}(t) − p_{j,i}(0)) / t = lim_{t↓0} p_{j,i}(t) / t ≥ 0.

In addition, we have

Σ_{j∈S} q_{j,i} = Σ_{j∈S} lim_{t↓0} (p_{j,i}(t) − p_{j,i}(0)) / t = lim_{t↓0} Σ_{j∈S} (p_{j,i}(t) − p_{j,i}(0)) / t

= lim_{t↓0} (Σ_{j∈S} p_{j,i}(t) − 1) / t = lim_{t↓0} (1 − 1) / t = 0,   (1.31)

provided we may interchange the summation and the limit. Finally we have the
following general fact. Let t ↦ P(t) be the matrix function t ↦ (p_{j,i}(t))_{j,i∈S}.
Then P(t) satisfies the Kolmogorov backward and forward differential equations:

dP(t)/dt = Q P(t) = P(t) Q,   t ≥ 0.   (1.32)

The first equality in (1.32) is called the Kolmogorov forward equation, and the
second one the Kolmogorov backward equation. The solution of this matrix-valued
differential equation is given by P(t) = e^{tQ} P(0). But since P(0) =
(p_{j,i}(0))_{j,i∈S} = (δ_{j,i})_{j,i∈S} is the identity matrix, it follows that P(t) = e^{tQ}. The
equalities in (1.32) hold true, because by the semigroup property (1.30) we have:

(P(t + Δt) − P(t)) / Δt = ((P(Δt) − P(0)) / Δt) P(t) = P(t) ((P(Δt) − P(0)) / Δt).   (1.33)

Then we let Δt tend to 0 in (1.33) to obtain (1.32).

5.4. Poisson process. We begin with a formal definition.

1.29. Definition. A Poisson process

{(Ω, ℱ, P), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (ℕ, 𝒩)}

(see (1.46) below) is a continuous time process X(t), t ≥ 0, with values in
ℕ = {0, 1, ...} which possesses the following properties:

(a) For Δt > 0 sufficiently small the transition probabilities satisfy:

p_{i+1,i}(Δt) = P[X(t + Δt) = i + 1 | X(t) = i] = λΔt + o(Δt);

p_{i,i}(Δt) = P[X(t + Δt) = i | X(t) = i] = 1 − λΔt + o(Δt);

p_{j,i}(Δt) = P[X(t + Δt) = j | X(t) = i] = o(Δt),   j ≥ i + 2;

p_{j,i}(Δt) = 0,   j < i.   (1.34)

(b) The transition probabilities (s, i; t, j) ↦ P[X(t) = j | X(s) = i], t ≥ s,
only depend on t − s and j − i.

(c) The process {X(t) : t ≥ 0} has the Markov property.


Item (b) says that the Poisson process is homogeneous in time and in space; (b)
is implicitly used in (a). Note that a Poisson process is not continuous, because
when it moves it makes a jump. Put

p_i(t) = p_{i,0}(t) = p_{j+i,j}(t) = P[X(t) = j + i | X(0) = j],   i, j ∈ ℕ.   (1.35)

1.30. Proposition. Let the process

{(Ω, ℱ, P), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (ℕ, 𝒩)}

possess properties (a) and (b) in Definition 1.29. Then the following equality
holds for all t ≥ 0 and i ∈ ℕ:

p_i(t) = ((λt)^i / i!) e^{−λt}.   (1.36)

1.31. Remark. It is noticed that the equalities in (1.42), (1.40), and (1.44) only
depend on properties (a) and (b) in Definition 1.29. So from (a) and (b) we obtain

(d/dt) p_i(t) + λ p_i(t) = λ ((λt)^{i−1} / (i − 1)!) e^{−λt} = λ p_{i−1}(t),   i ≥ 1,   (1.37)

and hence

p_{j,i}(t) = p_{j−i}(t) = P[X(t) = j | X(0) = i] = ((λt)^{j−i} / (j − i)!) e^{−λt},   j ≥ i.   (1.38)

If 0 ≤ j < i, then p_{j,i}(t) = 0.

Proof. By definition we see that p_j(0) = P[X(0) = j | X(0) = 0] = δ_{0,j},
and so p₀(0) = 1 and p_j(0) = 0 for j ≠ 0. Let us first prove that the functions
t ↦ p_i(t), i ≥ 1, satisfy the differential equation in (1.45) below. First suppose
that i ≥ 2, and we consider:

p_i(t + Δt) − p_i(t) = P[X(t + Δt) = i] − p_i(t)

= Σ_{k=0}^{i} P[X(t + Δt) = i, X(t) = k] − p_i(t)

= Σ_{k=0}^{i} P[X(t + Δt) = i | X(t) = k] P[X(t) = k] − p_i(t)

= P[X(t + Δt) = i | X(t) = i] p_i(t) + P[X(t + Δt) = i | X(t) = i − 1] p_{i−1}(t)
  + Σ_{k=0}^{i−2} P[X(t + Δt) = i | X(t) = k] p_k(t) − p_i(t)

= (1 − λΔt + o(Δt)) p_i(t) + (λΔt + o(Δt)) p_{i−1}(t) + Σ_{k=0}^{i−2} p_k(t) o(Δt) − p_i(t)

= −λΔt p_i(t) + λΔt p_{i−1}(t) + Σ_{k=0}^{i} p_k(t) o(Δt).   (1.39)



From (1.39) we obtain

(d/dt) p_i(t) = −λ p_i(t) + λ p_{i−1}(t).   (1.40)

Next we consider i = 0:

p₀(t + Δt) − p₀(t) = P[X(t + Δt) = 0] − p₀(t)

= P[X(t + Δt) = 0 | X(t) = 0] P[X(t) = 0] − p₀(t)

= P[X(t + Δt) = 0 | X(t) = 0] p₀(t) − p₀(t) = (−λΔt + o(Δt)) p₀(t).   (1.41)

From (1.41) we get the equation

(d/dt) p₀(t) = −λ p₀(t).   (1.42)

For i = 1 we have:

p₁(t + Δt) − p₁(t) = P[X(t + Δt) = 1] − p₁(t)

= P[X(t + Δt) = 1 | X(t) = 1] P[X(t) = 1] − p₁(t)
  + P[X(t + Δt) = 1 | X(t) = 0] P[X(t) = 0]

= P[X(t + Δt) = 1 | X(t) = 1] p₁(t) − p₁(t) + P[X(t + Δt) = 1 | X(t) = 0] p₀(t)

= (−λΔt + o(Δt)) p₁(t) + (λΔt + o(Δt)) p₀(t).   (1.43)



From (1.43) we obtain:

(d/dt) p_1(t) = −λ p_1(t) + λ p_0(t). (1.44)

By definition we see that p_j(0) = P[X(0) = j | X(0) = 0] = δ_{0,j}, and so p_0(0) = 1 and p_j(0) = 0 for j ≠ 0. From (1.42) we get p_0(t) = e^{−λt}. From (1.40) and (1.44) we obtain

(d/dt)(e^{λt} p_i(t)) = λ e^{λt} p_{i−1}(t), i ≥ 1. (1.45)

By induction it follows that p_i(t) = ((λt)^i / i!) e^{−λt}. This completes the proof of Proposition 1.30. □
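The induction via (1.45) can also be checked numerically. The following sketch (our own illustration, not part of the text; the rate λ = 0.7, the horizon t = 2 and the step count are arbitrary choices) integrates the system (1.40), (1.42), (1.44) with a forward Euler scheme and compares the result with the closed form (1.36):

```python
import math

def poisson_pmf_via_ode(lam, t, imax, steps=20000):
    # Forward-Euler integration of p_0' = -lam*p_0 and
    # p_i' = -lam*p_i + lam*p_{i-1}, i >= 1  (equations (1.40), (1.42), (1.44)).
    p = [1.0] + [0.0] * imax          # initial condition p_i(0) = delta_{0,i}
    dt = t / steps
    for _ in range(steps):
        q = p[:]                      # values at the previous time step
        p[0] = q[0] - lam * q[0] * dt
        for i in range(1, imax + 1):
            p[i] = q[i] + (-lam * q[i] + lam * q[i - 1]) * dt
    return p

lam, t = 0.7, 2.0
numeric = poisson_pmf_via_ode(lam, t, imax=6)
# Closed form (1.36): p_i(t) = (lam*t)^i / i! * exp(-lam*t).
closed = [math.exp(-lam * t) * (lam * t) ** i / math.factorial(i)
          for i in range(7)]
```

With 20000 Euler steps the two lists agree to within about 10^{-4} componentwise.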


In Proposition 1.33 below we show that a process

{(Ω, F, P), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (N, N)} (1.46)

which satisfies (a) and (b) of Definition 1.29 is a time-homogeneous Markov process if and only if its increments are P-independent. First we prove a lemma, which is of independent interest.

1.32. Lemma. Let the functions p_i(t) be defined as in (1.35). Then the equality

p_i(t) = P[X(s + t) − X(s) = i] (1.47)

holds for all i ∈ N and all s, t ≥ 0.

Proof. Using the space and time invariance properties of the process X(t) shows:

P[X(s + t) − X(s) = i] = Σ_{k=0}^{∞} P[X(s + t) − X(s) = i, X(s) = k]

= Σ_{k=0}^{∞} P[X(s + t) = i + k, X(s) = k]

= Σ_{k=0}^{∞} P[X(s + t) = i + k | X(s) = k] P[X(s) = k]

(space and time invariance properties of p_i(t))

= Σ_{k=0}^{∞} p_i(t) P[X(s) = k] = p_i(t). (1.48)

The conclusion in Lemma 1.32 follows from (1.48). □

The following proposition says that a time and space-homogeneous process satisfying the equalities in (1.34) of Definition 1.29 is a Poisson process if and only if its increments are P-independent.



1.33. Proposition. The process {X(t) : t ≥ 0} possessing properties (a) and (b) of Definition 1.29 possesses the Markov property if and only if its increments are P-independent. Moreover, the equalities

P[X(t) − X(s) = j − i] = P[X(t) = j | X(s) = i]

= p_{j−i}(t − s) = ((λ(t − s))^{j−i} / (j − i)!) e^{−λ(t−s)} (1.49)

hold for all t ≥ s ≥ 0 and for all j ≥ i, i, j ∈ N.

PROOF. First assume that the process in (1.46) has the Markov property. Let t_{n+1} > t_n > ⋯ > t_1 > t_0 = 0, and let i_ℓ, 1 ≤ ℓ ≤ n + 1, be nonnegative integers. Then by induction we have

P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n + 1]

= Σ_{k=0}^{∞} P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n + 1, X(t_n) = k]

= Σ_{k=0}^{∞} (P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n + 1, X(t_n) = k] / P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k])

× P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k]

= Σ_{k=0}^{∞} P[X(t_{n+1}) − X(t_n) = i_{n+1} | X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, X(t_n) = k, 1 ≤ ℓ ≤ n]

× P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, X(t_n) = k, 1 ≤ ℓ ≤ n]

(Markov property)

= Σ_{k=0}^{∞} P[X(t_{n+1}) − X(t_n) = i_{n+1} | X(t_n) = k]

× P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k]

= Σ_{k=0}^{∞} P[X(t_{n+1}) = i_{n+1} + k | X(t_n) = k]

× P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k]

(homogeneity in space and time of the function t ↦ p_{i_{n+1}}(t))

= Σ_{k=0}^{∞} p_{i_{n+1}}(t_{n+1} − t_n) P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k]

(apply equality (1.47) in Lemma 1.32)

= Σ_{k=0}^{∞} P[X(t_{n+1}) − X(t_n) = i_{n+1}]

× P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n, X(t_n) = k]

= P[X(t_{n+1}) − X(t_n) = i_{n+1}] P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n]. (1.50)

By induction and employing (1.50) it follows that

P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ, 1 ≤ ℓ ≤ n] = ∏_{ℓ=1}^{n} P[X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ]

= ∏_{ℓ=1}^{n} p_{i_ℓ}(t_ℓ − t_{ℓ−1}). (1.51)

We still have to prove the converse statement, i.e. to prove that if the increments of the process X(t) are P-independent, then the process X(t) has the Markov property. Therefore we take states i_0 = 0, i_1, …, i_n, i_{n+1}, and times 0 = t_0 < t_1 < ⋯ < t_n < t_{n+1}, and we consider the conditional probability:

P[X(t_{n+1}) = i_{n+1} | X(t_0) = i_0, …, X(t_n) = i_n]

= P[X(t_{n+1}) = i_{n+1}, X(t_0) = i_0, …, X(t_n) = i_n] / P[X(t_0) = i_0, …, X(t_n) = i_n]

= P[X(t_0) = i_0, X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ − i_{ℓ−1}, 1 ≤ ℓ ≤ n + 1] / P[X(t_0) = i_0, X(t_ℓ) − X(t_{ℓ−1}) = i_ℓ − i_{ℓ−1}, 1 ≤ ℓ ≤ n]

(increments are P-independent)

= P[X(t_{n+1}) − X(t_n) = i_{n+1} − i_n]

= P[X(t_{n+1}) = i_{n+1} | X(t_n) = i_n]. (1.52)


The final equality in (1.52) follows by invoking another application of the fact that increments are P-independent. More precisely, since X(t_{n+1}) − X(t_n) and X(t_n) − X(0) are P-independent we have

P[X(t_{n+1}) = i_{n+1} | X(t_n) = i_n]

= P[X(t_{n+1}) − X(t_n) = i_{n+1} − i_n, X(t_n) − X(0) = i_n] / P[X(t_n) − X(0) = i_n]

= P[X(t_{n+1}) − X(t_n) = i_{n+1} − i_n]. (1.53)

The equalities in (1.49) follow from equality (1.47) in Lemma 1.32, from (1.53), from the definition of the function p_i(t) (see equality (1.35)), and from the explicit value of p_i(t) (see (1.36) in Proposition 1.30). This completes the proof of Proposition 1.33. □

Let (Ω, F, P) be a probability space and let the process t ↦ N(t) and the probability measures P_j, j ∈ N, in {(Ω, F, P_j)_{j∈N}, (N(t) : t ≥ 0), (ϑ_s : s ≥ 0), (N, N)} have the following properties:

(a) It has independent increments: N(t + h) − N(t) is independent of

F_t = σ(N(s) − N(0) : 0 ≤ s ≤ t).

(b) Constant intensity: the chance of an arrival in any interval of length h is the same:

P[N(t + h) − N(t) ≥ 1] = λh + o(h).

(c) Rarity of jumps ≥ 2:

P[N(t + h) − N(t) ≥ 2] = o(h).

(d) The measures P_j, j ≥ 1, are defined by: P_j[A] = P[A | N(0) = j]; moreover, it is assumed that P_0[N(0) = 0] = 1.

The following theorem and its proof are taken from Stirzaker [126], Theorem (13), page 74.

1.34. Theorem. Suppose that the process N(t) and the probability measures satisfy (a), (b), (c) and (d). Then the process N(t) is a Poisson process and

P_j[N(t) = k] = ((λt)^{k−j} / (k − j)!) e^{−λt}, k ≥ j. (1.54)

PROOF. In view of Proposition 1.33 it suffices to prove the identity in (1.54). To this end we put

f_n(t) = P_0[N(t) = n] = P[N(t) = n | N(0) = 0] = P[N(t) − N(0) = n].

Then we have, for n ≥ 2 fixed,

f_n(t + h) = P_0[N(t + h) = n] = Σ_{k=0}^{n} P_0[N(t + h) − N(t) = k, N(t) = n − k]

(the variables N(t + h) − N(t) and N(t) are P_0-independent)

= Σ_{k=0}^{n} P_0[N(t + h) − N(t) = k] × P_0[N(t) = n − k]

= P_0[N(t + h) − N(t) = 0] × P_0[N(t) = n]

+ P_0[N(t + h) − N(t) = 1] × P_0[N(t) = n − 1]

+ Σ_{k=2}^{n} P_0[N(t + h) − N(t) = k] × P_0[N(t) = n − k]

= (1 − P_0[N(t + h) − N(t) ≥ 1]) × P_0[N(t) = n]

+ P_0[N(t + h) − N(t) ≥ 1] × P_0[N(t) = n − 1]

− P_0[N(t + h) − N(t) ≥ 2] × P_0[N(t) = n − 1]

+ Σ_{k=2}^{n} P_0[N(t + h) − N(t) = k] × P_0[N(t) = n − k]

= (1 − λh + o(h)) f_n(t) + (λh + o(h)) f_{n−1}(t) + Σ_{k=2}^{n} o(h) f_{n−k}(t)

= (1 − λh) f_n(t) + λh f_{n−1}(t) + o(h). (1.55)

Observe that a similar argument yields

f_1(t + h) = (1 − λh) f_1(t) + λh f_0(t) + o(h), (1.56)

and also

f_0(t + h) = (1 − λh) f_0(t) + o(h). (1.57)

From (1.55), (1.56) and (1.57) we obtain, by rearranging, dividing by h and letting h ↓ 0:

f_n'(t) = −λ f_n(t) + λ f_{n−1}(t), n ≥ 1,

f_0'(t) = −λ f_0(t).

These equations can be solved by induction relative to n. An alternative way is to consider the generating function

G(s, t) := E_0[s^{N(t)}] = Σ_{n=0}^{∞} s^n P_0[N(t) = n] = Σ_{n=0}^{∞} s^n f_n(t).

Then ∂G(s, t)/∂t = λ(s − 1) G(s, t), and so G(s, t) = e^{λt(s−1)}. It follows that

P_0[N(t) = n] = f_n(t) = e^{−λt} (λt)^n / n!. Consequently, for k ≥ j we obtain

P_j[N(t) = k] = P[N(t) = k | N(0) = j] = P[N(t) − N(0) = k − j | N(0) = j]

= P[N(t) − N(0) = k − j] = e^{−λt} (λt)^{k−j} / (k − j)! = (RHS of (1.54)).

This completes the proof of Theorem 1.34. □
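Theorem 1.34 can be illustrated by simulation. The sketch below (our own, not from the text) builds a sample path of N(t) from i.i.d. exponential inter-arrival times, which anticipates the renewal description of the next chapter; the rate λ = 1.5, horizon t = 2 and sample size are arbitrary choices. The empirical mean of N(t) and the empirical value of P_0[N(t) = 2] are then compared with (1.54):

```python
import math
import random

random.seed(0)

def sample_N(lam, t):
    # Count arrivals in [0, t] when inter-arrival times are i.i.d. Exp(lam).
    n, s = 0, random.expovariate(lam)
    while s <= t:
        n += 1
        s += random.expovariate(lam)
    return n

lam, t, trials = 1.5, 2.0, 20000
counts = [sample_N(lam, t) for _ in range(trials)]
mean = sum(counts) / trials            # should be close to lam * t = 3.0
p2_hat = counts.count(2) / trials      # empirical P_0[N(t) = 2]
p2 = math.exp(-lam * t) * (lam * t) ** 2 / 2   # (1.54) with j = 0, k = 2
```

With 20000 trials the empirical mean lies within a few hundredths of λt, and p2_hat agrees with p2 to about two decimal places.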


CHAPTER 2 

Renewal theory and Markov chains 


Our main topic in this chapter is a discussion of renewal theory, classification properties of irreducible Markov chains, and a discussion of invariant measures. Its contents are mainly taken from Stirzaker [126].

1. Renewal theory

Let (X_r)_{r∈N} be a sequence of independent identically distributed random variables with the property that P[X_r > 0] > 0. Put S_n = Σ_{r=1}^{n} X_r, S_0 = 0, and define the renewal process N(t) by N(t) = max{n : S_n ≤ t}, t ≥ 0. The mean m(t) = E[N(t)] is called the renewal function. We have N(t) ≥ n if and only if S_n ≤ t, and hence

P[N(t) = n] = P[S_n ≤ t] − P[S_{n+1} ≤ t], and (2.1)

E[N(t)] = Σ_{r=1}^{∞} P[N(t) ≥ r] = Σ_{r=1}^{∞} P[S_r ≤ t]. (2.2)

For more details see e.g. [4] (for birth-death processes) and [126] (for renewal theory).

2.1. Theorem. If E[X_r] > 0, then N(t) has finite moments for all t < ∞.

Proof. Since E[X_r] > 0 there exists ε > 0 such that P[X_r ≥ ε] ≥ ε. Put M(t) = max{n : ε Σ_{r=1}^{n} 1_{X_r ≥ ε} ≤ t}. Since ε Σ_{r=1}^{n} 1_{X_r ≥ ε} ≤ Σ_{r=1}^{n} X_r it follows that N(t) ≤ M(t), and hence, with m = ⌊t/ε⌋,

E[N(t)] ≤ E[M(t)] = Σ_{n=1}^{∞} P[M(t) ≥ n] = Σ_{n=1}^{∞} P[ε Σ_{r=1}^{n} 1_{X_r ≥ ε} ≤ t]

= Σ_{n=1}^{∞} Σ_{A ⊂ {1,…,n}, ε #A ≤ t} P[X_j ≥ ε, j ∈ A, X_j < ε, j ∉ A]

= Σ_{n=1}^{∞} Σ_{A ⊂ {1,…,n}, ε #A ≤ t} P[X_1 ≥ ε]^{#A} (1 − P[X_1 ≥ ε])^{n − #A}

= Σ_{n=1}^{∞} Σ_{k=0}^{n ∧ m} \binom{n}{k} P[X_1 ≥ ε]^k (1 − P[X_1 ≥ ε])^{n−k}

≤ Σ_{k=0}^{m} Σ_{n=k}^{∞} \binom{n}{k} P[X_1 ≥ ε]^k (1 − P[X_1 ≥ ε])^{n−k}

= Σ_{k=0}^{m} 1 / P[X_1 ≥ ε] = (⌊t/ε⌋ + 1) / P[X_1 ≥ ε]. (2.3)

In the final equality in (2.3) we used the equality Σ_{n=0}^{∞} \binom{n+k}{n} z^n = (1 − z)^{−(k+1)} for |z| < 1. The inequality in (2.3) shows Theorem 2.1. □


It follows that E[N(t)] is finite whenever E[X_r] is strictly positive. This fact will be used in Theorem 2.2.

2.2. Theorem. The following equality is valid:

E[S_{N(t)+1}] = E[X_1] E[N(t) + 1].

The equality in Theorem 2.2 is called Wald's equation.

Proof. The time N(t) + 1 is a stopping time with respect to the filtration

F_n = σ(X_r : 0 ≤ r ≤ n) = σ(S_r − r E[X_1] : 0 ≤ r ≤ n).

Notice that the process n ↦ S_n − n E[X_1] is a martingale, and hence

E[S_{(N(t)+1) ∧ n} − ((N(t)+1) ∧ n) E[X_1]]

= E[S_{(N(t)+1) ∧ 0} − ((N(t)+1) ∧ 0) E[X_1]] = 0. (2.4)

Since E[N(t)] is finite, from (2.4) we get by letting n tend to ∞:

0 = lim_{n→∞} E[S_{(N(t)+1) ∧ n} − ((N(t)+1) ∧ n) E[X_1]]

= E[S_{N(t)+1} − (N(t)+1) E[X_1]]. (2.5)

Consequently, the conclusion in Theorem 2.2 follows. □
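Wald's equation in Theorem 2.2 is easy to test by Monte Carlo. In the sketch below (our own; the Uniform(0,2) inter-arrival distribution, so that E[X_1] = 1, the horizon t = 5 and the sample size are arbitrary choices) the two sides E[S_{N(t)+1}] and E[X_1] E[N(t)+1] are estimated from the same realizations:

```python
import random

random.seed(1)

def overshoot_sum(t, draw):
    # One realization of (S_{N(t)+1}, N(t)): sum i.i.d. inter-arrival
    # times until the partial sum first exceeds t.
    s, n = 0.0, 0
    while True:
        s += draw()
        if s > t:
            return s, n        # s = S_{N(t)+1}, n = N(t)
        n += 1

t, trials = 5.0, 20000
samples = [overshoot_sum(t, lambda: random.uniform(0.0, 2.0))
           for _ in range(trials)]
lhs = sum(s for s, _ in samples) / trials                   # E[S_{N(t)+1}]
rhs = 1.0 * (sum(n for _, n in samples) / trials + 1.0)     # E[X_1]*E[N(t)+1]
```

The two estimates agree to within Monte Carlo noise; note also that S_{N(t)+1} > t always holds, so lhs exceeds t.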

2.3. Theorem. Let (X_r)_{r∈N} be a sequence of independent, identically distributed random variables such that P[X_r = 0] = 0. Put S_0 = 0 and S_n = Σ_{r=1}^{n} X_r. Let the process N(t) be defined as in (2.2). Let F(t) be the distribution function of the variable X_r. Put m(t) = E[N(t)]. Then m(t) satisfies the renewal equation:

m(t) = F(t) + ∫_0^t m(t − s) dF(s) = Σ_{k=1}^{∞} (μ_F)^{*k}[0, t], (2.6)

where μ_F(a, b] = F(b) − F(a), and (μ_1 * μ_2)(a, b] = ∫∫ 1_{(a,b]}(s + t) dμ_1(s) dμ_2(t), 0 ≤ a < b (i.e. the convolution product of the measures μ_1 and μ_2). Moreover,

(1 − ∫_0^∞ e^{−λs} dF(s)) × λ ∫_0^∞ e^{−λt} m(t) dt = ∫_0^∞ e^{−λs} dF(s).

If the X_r are independent exponentially distributed random variables, and thus the process (N(t) : t ≥ 0) is a Poisson process of parameter λ > 0, then m(t) = λt.



PROOF. On the event {X_1 > t} we have N(t) = 0, and hence by using conditional expectation we see

m(t) = E[N(t)] = E[N(t) 1_{X_1 ≤ t}] = E[E[N(t) 1_{X_1 ≤ t} | σ(X_1)]]

= E[1_{X_1 ≤ t} E[N(t) − N(X_1) | σ(X_1)]] + E[1_{X_1 ≤ t} E[N(X_1) | σ(X_1)]]

(on the event {X_1 ≤ t} we have N(X_1) = 1)

= E[1_{X_1 ≤ t} E[N(t) − N(X_1) | σ(X_1)]] + E[1_{X_1 ≤ t}]

(the distribution of N(t) − N(s), t > s, is the same as the distribution of N(t − s))

= E[1_{X_1 ≤ t} E[N(t − X_1) | σ(X_1)]] + E[1_{X_1 ≤ t}]

= E[N(t − X_1) 1_{X_1 ≤ t}] + E[1_{X_1 ≤ t}]

= ∫_0^t m(t − x) dF(x) + F(t). (2.7)

This completes the proof of Theorem 2.3. □
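The renewal equation (2.6) lends itself to numerical solution by discretizing the convolution integral. The sketch below is our own illustration (grid size, horizon and the exponential density are arbitrary choices); for Exp(λ) inter-arrival times Theorem 2.3 predicts m(t) = λt, which the discretization recovers:

```python
import math

def renewal_function(f_density, T, steps):
    # Discretize m(t) = F(t) + \int_0^t m(t - s) dF(s)   (equation (2.6))
    # on the grid t_i = i*h, approximating dF over (t_{j-1}, t_j] by
    # the increment F[j] - F[j-1] and m(t_i - s) by m[i - j].
    h = T / steps
    F = [0.0] * (steps + 1)
    for i in range(1, steps + 1):
        F[i] = F[i - 1] + f_density((i - 0.5) * h) * h   # midpoint-rule CDF
    m = [0.0] * (steps + 1)
    for i in range(1, steps + 1):
        conv = sum(m[i - j] * (F[j] - F[j - 1]) for j in range(1, i + 1))
        m[i] = F[i] + conv
    return m

lam, T, steps = 2.0, 3.0, 600
m = renewal_function(lambda x: lam * math.exp(-lam * x), T, steps)
# For Exp(lam) inter-arrival times the theorem gives m(t) = lam * t,
# so m[-1] should be close to lam * T = 6.0.
```

The discretization error is of order h times the renewal function, so with h = 0.005 the computed m(T) is within a few percent of λT.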




2.4. Lemma. Suppose P[X_r < ∞] = 1. Then

lim_{t→∞} N(t) = ∞, P-almost surely. (2.8)

Proof. Put Z = lim_{t→∞} N(t) = sup_{t>0} N(t). Observe that Σ_{k=1}^{N(t)+1} X_k > t, and hence, by letting t → ∞, the event {Z < ∞} is contained in ⋃_{r=1}^{Z+1} {X_r = ∞}, and thus

P[Z < ∞] = P[⋃_{r=1}^{Z+1} {X_r = ∞}, Z < ∞]

≤ P[⋃_{r=1}^{∞} {X_r = ∞}] ≤ Σ_{r=1}^{∞} P[X_r = ∞] = 0. (2.9)

The result in Lemma 2.4 follows from (2.9). □

Since lim_{t→∞} N(t) = ∞ P-almost surely, we have lim_{t→∞} N(t)/(N(t)+1) = 1 P-almost surely. The following proposition follows from the strong law of large numbers (SLLN).

2.5. Proposition. Let (X_r)_{r∈N} be a sequence of non-negative independent, identically distributed random variables in L^1(Ω, F, P) such that P[X_r < ∞] = 1. Then

lim_{t→∞} S_{N(t)+1}/(N(t)+1) = lim_{t→∞} S_{N(t)}/N(t) = E[X_1], P-almost surely. (2.10)

2.6. Theorem (First renewal theorem). Let the hypotheses be as in Proposition 2.5. Then

lim_{t→∞} N(t)/t = 1/E[X_1], P-almost surely. (2.11)

PROOF. By definition we have S_{N(t)} ≤ t < S_{N(t)+1}, therefore

S_{N(t)}/N(t) ≤ t/N(t) < S_{N(t)+1}/N(t) = ((N(t)+1)/N(t)) × (S_{N(t)+1}/(N(t)+1)). (2.12)

The result in (2.11) now follows from (2.12) in conjunction with (2.8) and (2.10). This proves Theorem 2.6. □
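Theorem 2.6 says that a single long sample path already reveals 1/E[X_1]. A quick sketch (our own; Uniform(0,3) inter-arrival times, so E[X_1] = 3/2, and the horizon t = 10000 are arbitrary choices):

```python
import random

random.seed(4)

def N_of_t(t, draw):
    # N(t) = max{n : S_n <= t} computed on one sample path.
    n, s = 0, draw()
    while s <= t:
        n += 1
        s += draw()
    return n

t = 10000.0
n = N_of_t(t, lambda: random.uniform(0.0, 3.0))   # E[X_1] = 1.5
rate = n / t                                      # should be near 1/1.5
```

For this horizon the observed rate is within about one percent of 2/3.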


The proof of the following theorem is somewhat more intricate. 


2.7. Theorem (Elementary renewal theorem). Let the hypotheses be as in Proposition 2.5. As above, put m(t) = E[N(t)]. Then

lim_{t→∞} m(t)/t = 1/E[X_1]. (2.13)



2.8. Remark. From Theorems 2.6 and 2.7 it follows that the family {N(t)/t : t ≥ 1} is uniformly integrable. Here we use Scheffé's theorem.

Proof of Theorem 2.7. The equality (2.13) has to be considered as two inequalities. First we have t < S_{N(t)+1}, and hence by Theorem 2.2 we see

t ≤ E[S_{N(t)+1}] = E[X_1] (E[N(t)] + 1) = E[X_1] (m(t) + 1). (2.14)

The inequality in (2.14) is equivalent to

m(t)/t ≥ 1/E[X_1] − 1/t. (2.15)

From (2.15) we see

liminf_{t→∞} m(t)/t ≥ 1/E[X_1]. (2.16)

For the second inequality we proceed as follows. Fix a strictly positive real number a, and put N^a(t) = max{n ∈ N : Σ_{r=1}^{n} min(a, X_r) ≤ t}. Then N(t) ≤ N^a(t). Moreover, with S^a_n = Σ_{r=1}^{n} min(a, X_r), by Theorem 2.2 we have

t ≥ E[S^a_{N^a(t)}] = E[S^a_{N^a(t)+1} − min(a, X_{N^a(t)+1})]

= E[min(a, X_1)] E[N^a(t) + 1] − E[min(a, X_{N^a(t)+1})]

≥ E[min(a, X_1)] E[N(t) + 1] − a = (m(t) + 1) E[min(a, X_1)] − a. (2.17)

Hence, from (2.17) we obtain:

m(t)/t ≤ 1/E[min(a, X_1)] + (a − E[min(a, X_1)]) / (t E[min(a, X_1)]). (2.18)

From (2.18) we deduce:

limsup_{t→∞} m(t)/t ≤ 1/E[min(a, X_1)], for all a > 0. (2.19)

By letting a → ∞ in (2.19) we see

limsup_{t→∞} m(t)/t ≤ 1/E[X_1]. (2.20)

A combination of the inequalities (2.16) and (2.20) yields the result in Theorem 2.7. □


Next we extend these renewal theorems a little bit, by introducing a renewal-reward process (R_n)_{n∈N}, where "costs" are considered as negative rewards. We are also interested in the cumulative reward up to time t: C(t) (the reward is collected at the end of any interval); C_I(t) (the reward is collected at the start of any interval); C_P(t) (the reward accrues during any given time interval). More precisely we have:

C(t) = Σ_{j=1}^{N(t)} R_j, terminal reward at the end of the time interval, (2.21)

C_I(t) = Σ_{j=1}^{N(t)+1} R_j, initial reward at the beginning of the time interval, (2.22)

C_P(t) = Σ_{j=1}^{N(t)} R_j + P_{N(t)+1}, partial rewards during the time interval. (2.23)

For the corresponding reward functions we write

c(t) = E[C(t)], c_I(t) = E[C_I(t)] and c_P(t) = E[C_P(t)]. (2.24)

We are interested in the rates of reward: C(t)/t, C_I(t)/t, and C_P(t)/t. It is assumed that the renewal process N(t) is defined by the inter-arrival times X_r, r ∈ N. As above these inter-arrival times are non-negative, independent and identically distributed on a probability space (Ω, F, P). It is also assumed that the renewal-reward process R_n, n ∈ N, consists of independent and identically distributed random variables in the space L^1(Ω, F, P).




The following theorem will be proved. 


2.9. Theorem (Renewal-reward theorem). Suppose that 0 < E[X_1] < ∞, E[|R_1|] < ∞, and that the sequence (n^{−1} P_n)_{n∈N} is uniformly bounded in n ∈ N and ω, and has the property that lim_{n→∞} n^{−1} P_n = 0, P-almost surely. Let the notation be as in (2.21), (2.22), (2.23), and (2.24). Then the following time average limits exist P-almost surely and they are identified as

lim_{t→∞} C(t)/t = lim_{t→∞} C_I(t)/t = lim_{t→∞} C_P(t)/t = E[R_1]/E[X_1], P-almost surely. (2.25)

The following equalities hold as well:

lim_{t→∞} c(t)/t = lim_{t→∞} c_I(t)/t = lim_{t→∞} c_P(t)/t = E[R_1]/E[X_1]. (2.26)

Observe that the quotient E[R_1]/E[X_1] can be interpreted as the "expected reward accruing in a cycle" divided by the "expected duration of a cycle".

Other conditions on the sequence (P_n : n ∈ N) can be given while retaining the conclusion in Theorem 2.9. For example the following conditions could be imposed. The sequence (P_n : n ∈ N) is P-independent and identically distributed, or there are finite deterministic constants c_1 and c_2 such that |P_n| ≤ c_1 n + c_2 |R_n| and lim_{n→∞} P_n/n = 0. In these cases the sequence (P_n/n : n ∈ N) is uniformly integrable and lim_{n→∞} P_n/n = 0 P-almost surely.

Proof. By employing Theorem 2.6 and the strong law of large numbers we have

lim_{t→∞} C(t)/t = lim_{t→∞} (C(t)/N(t)) × (N(t)/t) = E[R_1]/E[X_1]. (2.27)

In exactly the same manner, with N(t)+1 replacing N(t), we see lim_{t→∞} C_I(t)/t = E[R_1]/E[X_1]. By hypothesis we know that lim_{n→∞} P_n/n = 0 P-almost surely. Since

lim_{t→∞} N(t) = ∞ P-almost surely,

we see that lim_{t→∞} P_{N(t)+1}/(N(t)+1) = 0. This together with (2.27) shows that

lim_{t→∞} C_P(t)/t = E[R_1]/E[X_1].

These arguments take care of the P-almost sure convergence.

Next we consider the convergence of the time averaged expected values. For the convergence of the time average of the reward function c_I(t) = E[C_I(t)] we use Wald's equation (see Theorem 2.2) and the elementary renewal Theorem 2.7. More precisely we have:


c_I(t) = E[C_I(t)] = E[Σ_{j=1}^{N(t)+1} R_j] = E[R_1] (E[N(t)] + 1). (2.28)

Then we divide by t and take the limit in (2.28) as t tends to ∞. An appeal to the elementary renewal Theorem 2.7 then shows the existence of the limit lim_{t→∞} c_I(t)/t = E[R_1]/E[X_1], which is the second part of (2.26) in Theorem 2.9. First observe that lim_{n→∞} R_n/n = 0 P-almost surely. This can be seen by an appeal to the Borel–Cantelli lemma. In fact we have

Σ_{n=1}^{∞} P[|R_n|/n > ε] = Σ_{n=1}^{∞} P[|R_1|/ε > n] ≤ ∫_0^∞ P[|R_1|/ε > x] dx = E[|R_1|/ε] < ∞. (2.29)

From (2.29) together with the Borel–Cantelli lemma it follows that lim_{n→∞} R_n/n = 0 P-almost surely. Consequently, the sequence {R_n/n : n ∈ N} is P-uniformly integrable. Then we have

|R_{N(t)+1}|/t ≤ ((N(t)+1)/t) × (Σ_{k=1}^{N(t)+1} |R_k|)/(N(t)+1), (2.30)

and hence by Wald's equality

E[|R_{N(t)+1}|/t] ≤ E[((N(t)+1)/t) × (Σ_{k=1}^{N(t)+1} |R_k|)/(N(t)+1)] = ((m(t)+1)/t) E[|R_1|]. (2.31)

By the strong law of large numbers and by the elementary renewal Theorem 2.7 we see that the families of random variables

(Σ_{k=1}^{N(t)+1} |R_k|)/(N(t)+1) and ((N(t)+1)/t) × (Σ_{k=1}^{N(t)+1} |R_k|)/(N(t)+1), t > 0, (2.32)

are uniformly integrable. Consequently the family {R_{N(t)+1}/t : t > 0} is uniformly integrable, and hence it converges pointwise and in L^1(Ω, F, P) to 0. Since

c(t)/t = (1/t) E[Σ_{k=1}^{N(t)+1} R_k] − (1/t) E[R_{N(t)+1}], (2.33)

and, by Wald's equation, the first term on the right equals ((m(t)+1)/t) E[R_1] while the second term tends to 0, the right-hand side of (2.33) converges to E[R_1]/E[X_1]. This proves the first part of (2.26) in Theorem 2.9. In order to prove the third part we need the uniform integrability of the family {P_{N(t)+1}/t : t ≥ 1}. This fact is not entirely trivial.



Let the finite constant C be such that |P_{n+1}| ≤ C(n+1) for all n ∈ N and P-almost surely; by hypothesis such a constant exists. From Remark 2.8 it follows that the family {(N(t)+1)/t : t ≥ 1} is uniformly integrable. Since

|P_{N(t)+1}|/t ≤ C (N(t)+1)/t, (2.34)

it follows that the family {P_{N(t)+1}/t : t ≥ 1} is uniformly integrable as well. If t ↑ ∞ then N(t) ↑ ∞, and lim_{t→∞} N(t)/t = 1/E[X_1] in L^1(Ω, F, P) as well as P-almost surely: see Lemma 2.4, Theorems 2.6 and 2.7, and Remark 2.8. From (2.34) it follows that lim_{t→∞} E[|P_{N(t)+1}|]/t = 0, which concludes the proof of Theorem 2.9. □
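The interpretation of E[R_1]/E[X_1] as expected reward per cycle divided by expected cycle duration can be seen on a single long path. The sketch below is our own example (cycle lengths Uniform(0,2) with mean 1, reward R = X + 1 with mean 2 collected at the end of each cycle, so E[R_1]/E[X_1] = 2); it computes the rate C(t)/t of (2.21):

```python
import random

random.seed(2)

# One long renewal-reward path: cycles of Uniform(0,2) duration,
# reward X + 1 collected when the cycle ends (terminal reward, (2.21)).
t_end, s, c = 50000.0, 0.0, 0.0
while True:
    x = random.uniform(0.0, 2.0)
    if s + x > t_end:
        break                  # the running cycle does not finish by t_end
    s += x
    c += x + 1.0
rate = c / t_end               # should approach E[R_1]/E[X_1] = 2.0
```

Over a horizon of 50000 the observed rate differs from 2 only in the second decimal place.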




1.1. Renewal theory and Markov chains. Next we consider this renewal theory in the context of strong Markov chains. Let (Ω, F, P) be a probability space and let X_m, m ∈ N, be a Markov chain on (Ω, F, P) with state space (S, S). Fix two states j and k ∈ S. Define the sequence of stopping times T_k^{(r)}, r ∈ N, as follows:

T_k^{(r+1)} = min{n > T_k^{(r)} : X_n = k}, T_k^{(0)} = 0. (2.35)

If X_n ≠ k for all n > T_k^{(r)}, then we put T_k^{(r+1)} = ∞. The differences T_k^{(r)} − T_k^{(r−1)}, r ≥ 1, are P_j-independent and identically distributed.

2.10. Theorem. Let f : [0, ∞] × S → R be a bounded measurable function. Then

T_k^{(r+s)} = T_k^{(r)} + T_k^{(s)} ∘ ϑ_{T_k^{(r)}} on {T_k^{(r)} < ∞}, (2.36)

and

E_j[f(T_k^{(r+s)}, X_{T_k^{(r+s)}}) 1_{T_k^{(r)} < ∞} | F_{T_k^{(r)}}]

= E_j[f(T_k^{(r+s)}, X_{T_k^{(r+s)}}) 1_{T_k^{(r)} < ∞} | σ(T_k^{(r)}, X_{T_k^{(r)}})]

= E_j[f(T_k^{(r+s)}, X_{T_k^{(r+s)}}) 1_{T_k^{(r)} < ∞} | σ(T_k^{(r)})]

= (ω ↦ E_{X_{T_k^{(r)}(ω)}(ω)}[f(T_k^{(r)}(ω) + T_k^{(s)}, X_{T_k^{(s)}})] 1_{T_k^{(r)} < ∞}(ω))

= (ω ↦ E_k[f(T_k^{(r)}(ω) + T_k^{(s)}, X_{T_k^{(s)}})] 1_{T_k^{(r)} < ∞}(ω)). (2.37)

Consequently, conditioned on the event {T_k^{(r)} < ∞}, the stochastic variable T_k^{(r+s)} − T_k^{(r)} and the σ-field F_{T_k^{(r)}} are P_j-independent.

Suppose that P_k[T_k^{(1)} < ∞] = 1 and P_j[T_k^{(1)} < ∞] > 0. Then

P_j[T_k^{(r+1)} < ∞] = P_j[T_k^{(1)} < ∞],

and the variables T_k^{(r+1)} − T_k^{(r)}, r ∈ N, have the same distribution with respect to the probability measure A ↦ P_j[A | T_k^{(1)} < ∞].

Here P_j(A) = P[A | X_0 = j], A ∈ F, j ∈ S. Theorem 2.10 is a consequence of the strong Markov property.

PROOF. First we prove (2.36). On the event {T_k^{(r)} < ∞} we have

T_k^{(r+1)} = min{n > T_k^{(r)} : X_n = k} = min{n > T_k^{(r)} : X_{n − T_k^{(r)}} ∘ ϑ_{T_k^{(r)}} = k}

= T_k^{(r)} + min{n − T_k^{(r)} ≥ 1 : X_{n − T_k^{(r)}} ∘ ϑ_{T_k^{(r)}} = k}

= T_k^{(r)} + min{m ≥ 1 : X_m ∘ ϑ_{T_k^{(r)}} = k}

= T_k^{(r)} + T_k^{(1)} ∘ ϑ_{T_k^{(r)}}. (2.38)

The equality in (2.38) shows (2.36) in case s = 1. We use (2.38) with s, respectively r + s, instead of r to obtain (2.36) by induction on s. More precisely we have

T_k^{(r)} + T_k^{(s+1)} ∘ ϑ_{T_k^{(r)}} = T_k^{(r)} + (T_k^{(s)} + T_k^{(1)} ∘ ϑ_{T_k^{(s)}}) ∘ ϑ_{T_k^{(r)}}

= T_k^{(r)} + T_k^{(s)} ∘ ϑ_{T_k^{(r)}} + T_k^{(1)} ∘ ϑ_{T_k^{(r)} + T_k^{(s)} ∘ ϑ_{T_k^{(r)}}} (2.39)

(induction hypothesis)

= T_k^{(r+s)} + T_k^{(1)} ∘ ϑ_{T_k^{(r+s)}} = T_k^{(r+s+1)}, (2.40)

where in (2.39) we employed (2.38) with s instead of r and in (2.40) we used r + s instead of r. The equality in (2.40) shows (2.36) for s + 1 assuming that it is true for s. Since by (2.38) the equality in (2.36) is true for s = 1, induction shows the equality in (2.36).

Next we will prove the equality in (2.37). From equality (2.36) we get

E_j[f(T_k^{(r+s)}, X_{T_k^{(r+s)}}) 1_{T_k^{(r)} < ∞} | F_{T_k^{(r)}}]

= E_j[f(T_k^{(r)} + T_k^{(s)} ∘ ϑ_{T_k^{(r)}}, X_{T_k^{(s)}} ∘ ϑ_{T_k^{(r)}}) 1_{T_k^{(r)} < ∞} | F_{T_k^{(r)}}]

(the variable T_k^{(r)} is F_{T_k^{(r)}}-measurable, in combination with the strong Markov property)

= (ω ↦ E_{X_{T_k^{(r)}(ω)}(ω)}[f(T_k^{(r)}(ω) + T_k^{(s)}, X_{T_k^{(s)}})] 1_{T_k^{(r)} < ∞}(ω))

= (ω ↦ E_k[f(T_k^{(r)}(ω) + T_k^{(s)}, X_{T_k^{(s)}})] 1_{T_k^{(r)} < ∞}(ω)). (2.41)

The equalities in (2.41) show that the first, penultimate and ultimate quantity in (2.37) are equal. Another appeal to the strong Markov property shows that the first and second quantity in (2.37) coincide. Since on the event {T_k^{(r)} < ∞} the equality X_{T_k^{(r)}} = k holds, the second and third quantity in (2.37) are equal as well. This proves that all quantities in (2.37) in Theorem 2.10 are the same.

We still have to prove that on the event {T_k^{(r)} < ∞} the stochastic variable T_k^{(r+s)} − T_k^{(r)} and the σ-field F_{T_k^{(r)}} are P_j-independent. This can be achieved as follows. Let the event A be F_{T_k^{(r)}}-measurable and let g : [0, ∞] → R be a bounded measurable function. Then we have

E_j[g(T_k^{(r+s)} − T_k^{(r)}) 1_A 1_{T_k^{(r)} < ∞}] = E_j[g(T_k^{(s)} ∘ ϑ_{T_k^{(r)}}) 1_A 1_{T_k^{(r)} < ∞}] (2.42)

= E_j[E_j[g(T_k^{(s)} ∘ ϑ_{T_k^{(r)}}) 1_A 1_{T_k^{(r)} < ∞} | F_{T_k^{(r)}}]]

= E_j[E_j[g(T_k^{(s)} ∘ ϑ_{T_k^{(r)}}) 1_{T_k^{(r)} < ∞} | F_{T_k^{(r)}}] 1_A]

(strong Markov property: (2.37))

= E_j[E_{X_{T_k^{(r)}}}[g(T_k^{(s)})] 1_A 1_{T_k^{(r)} < ∞}]

= E_j[E_k[g(T_k^{(s)})] 1_A 1_{T_k^{(r)} < ∞}]

= E_k[g(T_k^{(s)})] P_j[T_k^{(r)} < ∞] P_j[A | T_k^{(r)} < ∞] (2.43)

(another appeal to the strong Markov property: (2.37))

= E_j[g(T_k^{(s)} ∘ ϑ_{T_k^{(r)}}) 1_{T_k^{(r)} < ∞}] P_j[A | T_k^{(r)} < ∞]

(use (2.36))

= E_j[g(T_k^{(r+s)} − T_k^{(r)}) 1_{T_k^{(r)} < ∞}] P_j[A | T_k^{(r)} < ∞]. (2.44)



From (2.44) the P_j-independence of T_k^{(r+s)} − T_k^{(r)} and the σ-field F_{T_k^{(r)}}, conditioned on the event {T_k^{(r)} < ∞}, follows. Since the expressions in (2.42) and (2.43) are equal it follows that the P_j[· | T_k^{(1)} < ∞]-distribution of the variable T_k^{(r+1)} − T_k^{(r)} does not depend on r, provided that

P_j[{T_k^{(r)} < ∞} \ {T_k^{(r+1)} < ∞}] = 0. (2.45)

By the strong Markov property it follows that

P_j[T_k^{(r+1)} < ∞] = P_j[T_k^{(r)} < ∞] P_k[T_k^{(1)} < ∞]. (2.46)

Since, by assumption, P_k[T_k^{(1)} < ∞] = 1, (2.46) implies that the probabilities P_j[T_k^{(r)} < ∞] do not depend on r ∈ N, and hence (2.45) follows. This proves Theorem 2.10. □

2.11. Definition. Let j ∈ S. If P_j[ T_j^{(1)} < ∞ ] = 1, then j is called recurrent (or persistent). If P_j[ T_j^{(1)} < ∞ ] < 1, then j is called a transient state. A recurrent state for which E_j[ T_j^{(1)} ] = ∞ is called a null state. A recurrent state for which E_j[ T_j^{(1)} ] < ∞ is called a non-null or positive state.


From (2.46) it follows that P_j[ T_j^{(r)} < ∞ ] = P_j[ T_j^{(1)} < ∞ ]^r, and hence, if the state j is recurrent, it is expected to be visited infinitely many times. Let N_k = Σ_{n=1}^∞ 1_{{X_n = k}} be the number of visits to the state k, and put u_k = E_j[ N_k ] = E[ N_k | X_0 = j ]. Then

u_k = Σ_{n=1}^∞ p_{jk}^{(n)} = Σ_{n=1}^∞ P[ X_n = k | X_0 = j ].

We also have {N_k ≥ r + 1} = {T_k^{(r+1)} < ∞} and hence by (2.46) we get

P_j[ N_k ≥ r + 1 ] = P_j[ N_k > 0 ] P_k[ N_k > 0 ]^r,   (2.47)

P_j[ T_k^{(r+1)} < ∞ ] = P_j[ T_k^{(1)} < ∞ ] P_k[ T_k^{(1)} < ∞ ]^r.   (2.48)

From (2.47) and (2.48) it follows that

u_k = Σ_{n=1}^∞ p_{jk}^{(n)} = Σ_{n=1}^∞ P[ X_n = k | X_0 = j ] = P_j[ N_k > 0 ] Σ_{r=0}^∞ P_k[ N_k > 0 ]^r.   (2.49)

Suppose that the state j communicates with k, i.e. suppose that p_{jk}^{(n)} > 0 for some integer n ≥ 1. From (2.49) it follows that the state k is recurrent if and only if Σ_{n=1}^∞ p_{jk}^{(n)} = ∞. The state k is transient if and only if Σ_{n=1}^∞ p_{jk}^{(n)} < ∞.
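The product formula (2.47) for the tail of the visit count N_k lends itself to a quick numerical sanity check. The sketch below is illustrative only: the three-state chain, its transition probabilities, and the state labels are assumptions made for the example (state 0 is absorbing, so N_k is almost surely finite), not data from the text.

```python
import random

# Hypothetical example chain (not from the text): state 0 is absorbing, so the
# number of visits N_k is almost surely finite and the geometric law
# P_j[N_k >= r + 1] = P_j[N_k > 0] * P_k[N_k > 0]^r of (2.47) can be estimated.
P = {1: [(2, 0.5), (0, 0.5)], 2: [(1, 0.5), (0, 0.5)]}

def step(state, rng):
    u, acc = rng.random(), 0.0
    for nxt, pr in P[state]:
        acc += pr
        if u < acc:
            return nxt
    return P[state][-1][0]

def visits(start, k, rng):
    """Number of visits to k at times n >= 1, starting from `start`."""
    state, count = start, 0
    while state != 0:                      # the excursion ends a.s. at state 0
        state = step(state, rng)
        if state == k:
            count += 1
    return count

rng = random.Random(1)
samples = [visits(1, 2, rng) for _ in range(200_000)]
est = [sum(1 for m in samples if m >= r) / len(samples) for r in (1, 2, 3)]

# For this chain P_1[N_2 > 0] = 1/2 and P_2[N_2 > 0] = 1/4 by direct computation.
exact = [0.5 * 0.25 ** (r - 1) for r in (1, 2, 3)]
print([round(e, 4) for e in est], exact)
```

The estimates match the products (1/2)·(1/4)^{r−1} within Monte Carlo error, in line with (2.47).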






































2.12. Theorem. Suppose that the states j and k intercommunicate. Then either 
both states are recurrent or both states are transient. 


PROOF. Since the states j and k intercommunicate there exist positive integers m and n such that p_{jk}^{(m)} > 0 and p_{kj}^{(n)} > 0. For any positive integer r we then have

p_{jj}^{(m+r+n)} ≥ p_{jk}^{(m)} p_{kk}^{(r)} p_{kj}^{(n)}.   (2.50)

By summing over r in (2.50) we see that Σ_{r=1}^∞ p_{jj}^{(r)} < ∞ if and only if Σ_{r=1}^∞ p_{kk}^{(r)} < ∞. From this fact together with (2.49) the statement in Theorem 2.12 follows. □

2.13. Definition. A Markov chain with state space S is called irreducible if all states communicate, i.e. for every j, k ∈ S there exists n ∈ N such that p_{jk}^{(n)} > 0. If X is irreducible and one state, and hence every state, is recurrent, then X is called recurrent.

2.14. Theorem. Let X be a recurrent and irreducible Markov chain. Put

v_j^k = E_k[ Σ_{n=1}^{T_k^{(1)}} 1_{{X_n = j}} ], j, k ∈ S.   (2.51)

Then 0 < v_j^k < ∞, j, k ∈ S, and v_j^k = Σ_{i∈S} v_i^k p_{ij}. In other words the vector (v_j^k : j ∈ S) is an invariant measure for X.

Proof. First we prove that 0 < v_j^k < ∞. Therefore we notice that

v_j^k = E_k[ Σ_{n=1}^{T_k^{(1)}} 1_{{X_n = j}} ] = Σ_{r=0}^∞ P_k[ T_k^{(1)} ≥ T_j^{(r+1)} ]
= Σ_{r=0}^∞ P_j[ T_k^{(1)} ≥ T_j^{(1)} ]^r P_k[ T_k^{(1)} ≥ T_j^{(1)} ],   (2.52)

where we used the equality:

P_k[ T_k^{(1)} ≥ T_j^{(r+1)} ] = P_j[ T_k^{(1)} ≥ T_j^{(1)} ]^r P_k[ T_k^{(1)} ≥ T_j^{(1)} ].   (2.53)

Suppose j ≠ k; for j = k we have v_k^k = 1. The equality in (2.53) follows from the strong Markov property as follows. For r = 0 the equality is clear. For r ≥ 1 we have

P_k[ T_k^{(1)} ≥ T_j^{(r+1)} ]
= P_k[ T_k^{(1)} ≥ T_j^{(r+1)}, T_k^{(1)} ≥ T_j^{(r)} + 1 ]
= P_k[ T_j^{(r)} + T_k^{(1)} ∘ ϑ_{T_j^{(r)}} ≥ T_j^{(r)} + T_j^{(1)} ∘ ϑ_{T_j^{(r)}}, T_k^{(1)} ≥ T_j^{(r)} + 1 ]
= P_k[ T_k^{(1)} ∘ ϑ_{T_j^{(r)}} ≥ T_j^{(1)} ∘ ϑ_{T_j^{(r)}}, T_k^{(1)} ≥ T_j^{(r)} + 1 ]

(strong Markov property)

= E_k[ P_{X_{T_j^{(r)}}}[ T_k^{(1)} ≥ T_j^{(1)} ], T_k^{(1)} ≥ T_j^{(r)} + 1 ]
= P_j[ T_k^{(1)} ≥ T_j^{(1)} ] P_k[ T_k^{(1)} ≥ T_j^{(r)} + 1 ]

(induction with respect to r)

= P_j[ T_k^{(1)} ≥ T_j^{(1)} ]^r P_k[ T_k^{(1)} ≥ T_j^{(1)} ].   (2.54)

Since P_k[ T_k^{(1)} ≥ T_j^{(1)} ] > 0 it follows by (2.52) that v_j^k > 0. By the same equality, and using the fact that P_j[ T_k^{(1)} ≥ T_j^{(1)} ] < 1, we see v_j^k < ∞.


Next we prove the equality v_j^k = Σ_{i∈S} v_i^k p_{ij}. Therefore we write

v_j^k = E_k[ Σ_{n=1}^{T_k^{(1)}} 1_{{X_n = j}} ] = Σ_{n=1}^∞ P_k[ X_n = j, T_k^{(1)} ≥ n ]
= Σ_{i∈S} Σ_{n=1}^∞ P_k[ X_n = j, X_{n−1} = i, T_k^{(1)} ≥ n ]
= Σ_{i∈S} Σ_{n=1}^∞ E_k[ P_k[ X_n = j | F_{n−1} ], X_{n−1} = i, T_k^{(1)} ≥ n ]

(Markov property)

= Σ_{i∈S} Σ_{n=1}^∞ E_k[ P_{X_{n−1}}[ X_1 = j ], X_{n−1} = i, T_k^{(1)} ≥ n ]
= Σ_{i∈S} P_i[ X_1 = j ] Σ_{n=0}^∞ P_k[ X_n = i, T_k^{(1)} − 1 ≥ n ]
= Σ_{i∈S} P_i[ X_1 = j ] E_k[ Σ_{n=0}^{T_k^{(1)}−1} 1_{{X_n = i}} ]
= Σ_{i∈S} P_i[ X_1 = j ] E_k[ Σ_{n=1}^{T_k^{(1)}} 1_{{X_n = i}} ].   (2.55)

In the last equality of (2.55) we used the equality

E_k[ Σ_{n=0}^{T_k^{(1)}−1} 1_{{X_n = i}} ] = E_k[ Σ_{n=1}^{T_k^{(1)}} 1_{{X_n = i}} ],

which is evident for i ≠ k, and both sides are equal to 1 for i = k. As a consequence from (2.55) we see that v_j^k = Σ_{i∈S} v_i^k p_{ij}. □
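Theorem 2.14 can be probed numerically. In the sketch below the three-state birth-death chain is an assumption chosen for illustration (not taken from the text): v_j^k is estimated by averaging visit counts over simulated excursions from k, and the invariance v^k = v^k P is then checked.

```python
import random

# Numerical sketch of Theorem 2.14 (the chain below is an assumed example, not
# from the text): estimate v_j^k = E_k[ sum_{n=1..T_k^(1)} 1_{X_n = j} ] by
# simulating excursions from k = 0, then check the invariance v = v P.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.5, 0.5]]
k = 0

def step(i, rng):
    u, acc = rng.random(), 0.0
    for j, pr in enumerate(P[i]):
        acc += pr
        if u < acc:
            return j
    return len(P) - 1

rng = random.Random(7)
excursions = 100_000
counts = [0.0] * 3
for _ in range(excursions):
    state = k
    while True:
        state = step(state, rng)
        counts[state] += 1
        if state == k:              # first return time T_k^(1) reached
            break

v = [c / excursions for c in counts]                     # estimates of v_j^k
vP = [sum(v[i] * P[i][j] for i in range(3)) for j in range(3)]
print("v  =", [round(x, 3) for x in v])
print("vP =", [round(x, 3) for x in vP])
```

By detailed balance the stationary distribution of this particular chain is (1/4, 1/2, 1/4), so the exact invariant vector is v^0 = (1, 2, 1); the estimate and its image under P agree with this within simulation error.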

2.15. Corollary. Let the row vector v^k := (v_j^k : j ∈ S) be as in equality (2.51) of Theorem 2.14. Then v^k is a minimal invariant measure in the sense that if x = (x_j : j ∈ S) is another invariant measure such that x_k = 1, then x_j ≥ v_j^k, j ∈ S.


Proof. We write:

x_j = p_{kj} + Σ_{s∈S, s≠k} x_s p_{sj}
= p_{kj} + Σ_{s_1∈S, s_1≠k} p_{k s_1} p_{s_1 j} + Σ_{s_1∈S, s_1≠k} Σ_{s_2∈S, s_2≠k} x_{s_2} p_{s_2 s_1} p_{s_1 j}
= P_k[ X_1 = j, T_k^{(1)} ≥ 1 ] + P_k[ X_2 = j, T_k^{(1)} ≥ 2 ] + Σ_{s_1≠k} Σ_{s_2≠k} x_{s_2} p_{s_2 s_1} p_{s_1 j}
≥ P_k[ X_1 = j, T_k^{(1)} ≥ 1 ] + P_k[ X_2 = j, T_k^{(1)} ≥ 2 ] + ⋯ + P_k[ X_n = j, T_k^{(1)} ≥ n ].   (2.56)

Upon letting n tend to ∞ in (2.56) we see that x_j ≥ v_j^k. This proves Corollary 2.15. □



qaiteye 

Challenge the way we run 


EXPERIENCE THE POWER OF 
FULL ENGAGEMENT... 


RUN FASTER. — p 

RUN LONGER.. 

RUN EASIER... > 




















2.16. Theorem. Let X be an irreducible Markov chain with transition matrix P = (p_{ij})_{(i,j)∈S×S}. The following assertions hold:

(a) If any state is non-null recurrent, then all states are.
(b) The chain is non-null recurrent if and only if there exists a stationary distribution π or invariant measure. If this is the case, then

π_k = 1/E_k[ T_k^{(1)} ] and v_j^k = π_j/π_k (see (2.51)).   (2.57)

As a consequence of (2.57) stationary distributions are unique.


Proof. (a) The proof of assertion (a) will follow from the proof of assertion (b).

(b) Let k be a state which is non-null recurrent. By Theorem 2.12 it follows that the chain is recurrent. By Theorem 2.14 the vector (v_j^k : j ∈ S) as defined in (2.51) is an invariant vector. Since k is non-null recurrent it follows that 0 < E_k[ T_k^{(1)} ] < ∞, and that the vector (v_j^k / E_k[ T_k^{(1)} ] : j ∈ S), with π_k = 1/E_k[ T_k^{(1)} ], is a stationary vector. It follows that if the irreducible chain X contains at least one non-null recurrent state, then there exists a stationary distribution.

Next suppose that there exists a stationary distribution π := (π_j : j ∈ S). Then π_k = Σ_{j∈S} π_j p_{jk}^{(n)} for n ∈ N. Since the chain is irreducible, and the vector is a probability vector, at least one π_{j_0} ≠ 0. By irreducibility there exists n ∈ N such that p_{j_0 k}^{(n)} ≠ 0, and hence π_k ≠ 0, k ∈ S. Consider for any given k ∈ S the vector x = (π_j/π_k : j ∈ S). Then x_k = 1 and by Corollary 2.15 x_j ≥ v_j^k for all j ∈ S. It follows that

E_k[ T_k^{(1)} ] = Σ_{j∈S} v_j^k ≤ Σ_{j∈S} π_j/π_k = 1/π_k.   (2.58)

Therefore k is non-null recurrent for all k ∈ S. It follows that if there exists one non-null recurrent state k ∈ S, then all states in S are non-null recurrent. Altogether this proves assertion (a), and also a large part of (b). From (2.58) the first equality in (2.57) follows. Finally we will show the second equality in (2.57). The vector x − v^k is invariant and, by Corollary 2.15, x_j − v_j^k ≥ 0. Hence we obtain, for all positive integers n,

0 = x_k − v_k^k = Σ_{i∈S} (x_i − v_i^k) p_{ik}^{(n)} ≥ (x_j − v_j^k) p_{jk}^{(n)}.   (2.59)




In (2.59) we choose n in such a way that p_{jk}^{(n)} > 0. By irreducibility this is possible. It follows that (see (2.51)) x_j = v_j^k, and hence

E_k[ T_k^{(1)} ] = Σ_{j∈S} v_j^k = Σ_{j∈S} x_j = Σ_{j∈S} π_j/π_k = 1/π_k, and v_j^k = x_j = π_j/π_k.   (2.60)

The second equality in (2.57) is the same as (2.60). This concludes the proof of Theorem 2.16. □
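The first equality in (2.57) is easy to confirm on a concrete example. The two-state chain below is a hypothetical instance (its entries are not from the text); the stationary vector is found by power iteration and the mean return time by first-step analysis.

```python
# Deterministic check of the first equality in (2.57), pi_k = 1 / E_k[T_k^(1)],
# on a hypothetical two-state chain (not taken from the text).
p01, p10 = 0.3, 0.4
P = [[1 - p01, p01], [p10, 1 - p10]]

# Stationary distribution by power iteration: pi = pi P.
pi = [0.5, 0.5]
for _ in range(10_000):
    pi = [pi[0] * P[0][j] + pi[1] * P[1][j] for j in range(2)]

# First-step analysis: m1 = E_1[first hitting time of 0] satisfies m1 = 1/p10,
# and E_0[T_0^(1)] = 1 + p01 * m1.
m1 = 1.0 / p10
mean_return_0 = 1.0 + p01 * m1
print(pi[0], 1.0 / mean_return_0)   # both equal 4/7
```

Both printed numbers equal 4/7: the stationary mass of state 0 is the reciprocal of its mean return time, as (2.57) asserts.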


Let k be a non-null recurrent state, and suppose that the state j intercommunicates with k. Then both states are non-null or positive recurrent. Next we define the renewal process N_k(n), n ∈ N, as follows:

N_k(n) = max{ r : T_k^{(r)} ≤ n }.   (2.61)

Notice the inequalities:

T_k^{(N_k(n))} ≤ n < T_k^{(N_k(n)+1)},

and hence T_k^{(m)} ≤ n if m = N_k(n). We are interested in the following type of limits:

lim_{n→∞} (1/n) Σ_{ℓ=1}^n p_{kj}^{(ℓ)} = lim_{n→∞} (1/n) Σ_{ℓ=1}^n P[ X_ℓ = j | X_0 = k ]
= lim_{n→∞} E[ (1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = j}} | X_0 = k ].   (2.62)

Put m = N_k(n). Then T_k^{(m)} ≤ n, and consequently we see

(1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = j}} ≈ (N_k(n)/n) · (1/m) Σ_{u=1}^m Σ_{ℓ=T_k^{(u−1)}+1}^{T_k^{(u)}} 1_{{X_ℓ = j}}.   (2.63)

Notice that for j = k we have

Σ_{ℓ=T_k^{(u−1)}+1}^{T_k^{(u)}} 1_{{X_ℓ = k}} = 1,

and consequently,

(1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = k}} ≈ N_k(n)/n.

Hence, we observe that (see Theorem 2.6)

lim_{n→∞} (1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = k}} = lim_{n→∞} N_k(n)/n = 1/E[ T_k^{(1)} | X_0 = k ], P_k-almost surely.   (2.64)




We also see that

(1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = j}} = (N_k(n)/n) · (1/m) Σ_{u=1}^m Σ_{ℓ=T_k^{(u−1)}+1}^{T_k^{(u)}} 1_{{X_ℓ = j}} + (1/n) Σ_{ℓ=T_k^{(m)}+1}^n 1_{{X_ℓ = j}}, m = N_k(n).   (2.65)

From the strong law of large numbers we get:

lim_{m→∞} (1/m) Σ_{u=1}^m Σ_{ℓ=T_k^{(u−1)}+1}^{T_k^{(u)}} 1_{{X_ℓ = j}} = E[ Σ_{ℓ=1}^{T_k^{(1)}} 1_{{X_ℓ = j}} | X_0 = k ] = v_j^k.   (2.66)

From (2.63), (2.64), and (2.65) we obtain:

lim_{n→∞} (1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = j}} = lim_{n→∞} (N_k(n)/n) · (1/m) Σ_{u=1}^m Σ_{ℓ=T_k^{(u−1)}+1}^{T_k^{(u)}} 1_{{X_ℓ = j}} = v_j^k / E[ T_k^{(1)} | X_0 = k ].   (2.67)


The equality in (2.67) together with Theorem 2.9 shows the following theorem. 

2.17. Theorem. Let the sequence of stopping times (T_k^{(r)})_{r∈N} be defined as in (2.35), and let v_j^k be defined as in (2.66). Suppose that the states j and k intercommunicate and that one of them is non-null recurrent; then the other is also non-null recurrent. Moreover,

lim_{n→∞} (1/n) Σ_{ℓ=1}^n 1_{{X_ℓ = j}} = v_j^k / E[ T_k^{(1)} | X_0 = k ] = 1 / E[ T_j^{(1)} | X_0 = j ], and   (2.68)

lim_{n→∞} (1/n) Σ_{ℓ=1}^n p_{kj}^{(ℓ)} = lim_{n→∞} (1/n) Σ_{ℓ=1}^n P[ X_ℓ = j | X_0 = k ]
= v_j^k / E[ T_k^{(1)} | X_0 = k ] = 1 / E[ T_j^{(1)} | X_0 = j ].   (2.69)

Hence, with π_j = lim_{n→∞} (1/n) Σ_{ℓ=1}^n p_{jj}^{(ℓ)}, μ_j = E[ T_j^{(1)} | X_0 = j ], and v_j^k as in equality (2.66), we have π_j = 1/μ_j = v_j^k/μ_k.
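The Cesàro limit in (2.69) can be illustrated deterministically with matrix powers; the two-state transition matrix below is an assumed example, not taken from the text.

```python
# Deterministic illustration of the Cesaro limit (2.69) on an assumed two-state
# chain (the matrix is an example, not from the text): the averages
# (1/n) sum_{l=1}^n p_{k0}^(l) approach 1 / E_0[T_0^(1)] for either start k.
P = [[0.7, 0.3], [0.4, 0.6]]

def matmul(A, B):
    return [[sum(A[i][m] * B[m][j] for m in range(2)) for j in range(2)]
            for i in range(2)]

n = 5_000
power = [[1.0, 0.0], [0.0, 1.0]]
cesaro = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(n):
    power = matmul(power, P)                 # power = P^l after l steps
    for i in range(2):
        for j in range(2):
            cesaro[i][j] += power[i][j] / n

# First-step analysis for this chain gives E_0[T_0^(1)] = 1 + 0.3 / 0.4 = 7/4.
print(cesaro[0][0], cesaro[1][0], 4 / 7)
```

Both Cesàro averages approach 4/7 = 1/E_0[T_0^{(1)}], independently of the starting state k, as (2.69) predicts.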


1.1.1. Random walks. In this example the state space is Z, and the process X_n, n ∈ N, has a transition probability matrix with the following entries: p_{i,i−1} = q, p_{i,i+1} = p, 0 < p = 1 − q < 1, and p_{i,j} = 0 for |i − j| ≠ 1. Such a random walk can be realized by putting X_n = Σ_{k=0}^n S_k, where S_0 is the initial state (which may be random), the variables S_k, k ∈ N, k ≥ 1, are P-independent of each other and are also P-independent of S_0. Moreover, each variable S_k, k ≥ 1, is a Bernoulli variable taking the value +1 with probability p and the value −1 with probability q. This Markov chain is irreducible: every state communicates with every other one. The set of states is closed. The corresponding infinite transition matrix looks as follows:

( ⋱ ⋱ ⋱ ⋱        )
( … q 0 p 0 …    )
( … 0 q 0 p …    )
( … 0 0 q 0 ⋱    )

The state 0 has period two: p_{00}^{(2n+1)} = 0, and p_{00}^{(2n)} = C(2n, n) p^n q^n, where C(2n, n) = (2n)!/(n!)² denotes the binomial coefficient. In order to check transiency (or recurrence) we need to calculate

Σ_{n=0}^∞ p_{00}^{(n)} = Σ_{n=0}^∞ C(2n, n) p^n q^n.   (2.70)


By Stirling's formula we have n! ∼ √(2πn) n^n e^{−n}, which means that

lim_{n→∞} n!/(√(2πn) n^n e^{−n}) = 1.














Since

C(2n, n) = (2n)!/(n!)² ∼ (√(4πn) (2n)^{2n} e^{−2n})/(2πn n^{2n} e^{−2n}) = 4^n/√(πn),   (2.71)

the sum in (2.70) is finite if and only if the sum

Σ_{n=1}^∞ (4pq)^n/√(πn)   (2.72)

is finite.


If p = 1 − q ≠ 1/2, then 4pq < 1, and hence the sum in (2.72) is finite, and so the unrestricted asymmetric random walk in Z is transient. However, if p = q = 1/2, then 4pq = 1 and the sum in (2.72) diverges, and so the symmetric unrestricted random walk in Z is recurrent. One may also do similar calculations for symmetric random walks in Z², Z³, and Z^d, d ≥ 4. It turns out that in Z² the 2n-th symmetric transition probability satisfies (for n → ∞)

p_{00}^{(2n)} ∼ 1/(πn),

and hence the sum Σ_{n=0}^∞ p_{00}^{(2n)} = ∞. It follows that the symmetric random walk in Z² is recurrent. The corresponding return probability p_{00}^{(2n)} for the symmetric random walk in Z³ possesses the following asymptotic behavior:

p_{00}^{(2n)} ∼ 2 (3/(4πn))^{3/2}.

Hence the sum Σ_{n=0}^∞ p_{00}^{(2n)} < ∞, and so the state 0 is transient. The 2n-th return probabilities of the symmetric random walk in Z^d satisfy

p_{00}^{(2n)} ∼ c_d n^{−d/2}, n → ∞,

for some constant c_d, and hence the sum Σ_{n=0}^∞ p_{00}^{(2n)} < ∞ in dimensions d ≥ 3. So in dimensions d ≥ 3 the symmetric random walk is transient, and in dimensions d = 1, 2 the symmetric random walk is recurrent.
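The convergence criterion behind (2.70)-(2.72) can be checked numerically. The sketch below uses the classical generating function Σ_{n≥0} C(2n, n) x^n = (1 − 4x)^{−1/2}, |x| < 1/4, as an independent closed form, and verifies Stirling's approximation in log-space; the choice p = 0.6 is an assumption made to exhibit the transient case.

```python
import math

# Numerical check of (2.70)-(2.72) (p = 0.6 is an assumed transient example).
# Independent closed form: sum_{n>=0} C(2n, n) x^n = (1 - 4x)^(-1/2), |x| < 1/4.
p = 0.6
q = 1 - p                        # 4pq = 0.96 < 1: the walk should be transient
partial, term = 0.0, 1.0         # term = C(2n, n) (pq)^n, starting at n = 0
for n in range(2000):
    partial += term
    term *= 2 * (2 * n + 1) / (n + 1) * (p * q)   # C(2n+2, n+1)/C(2n, n) = 2(2n+1)/(n+1)
closed_form = 1.0 / math.sqrt(1 - 4 * p * q)
print(partial, closed_form)      # finite Green function: both close to 5

# Stirling check of (2.71) in log-space: log C(2m, m) - log(4^m / sqrt(pi m)) -> 0.
m = 1000
log_ratio = (math.lgamma(2 * m + 1) - 2 * math.lgamma(m + 1)
             - (m * math.log(4) - 0.5 * math.log(math.pi * m)))
print(log_ratio)                 # close to 0
```

For p = q = 1/2 the terms behave like 1/√(πn) and the partial sums diverge, matching the recurrence of the symmetric walk.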


We come back to the one-dimensional situation, and we reconsider the return times to a state k ∈ Z: T_k^{(1)} = inf{ n ≥ 1 : X_n = k }. Notice that S_k, k ∈ N, are the step sizes, which are ±1. Also observe that X_k = Σ_{j=0}^k S_j. We consider the moment generating function G_{j,k}(s) = E_j[ s^{T_k^{(1)}} ], 0 ≤ s < 1. Observe that on the event {T_k^{(1)} = ∞} the quantity s^{T_k^{(1)}} has to be interpreted as 0. In addition, we have P_j[ T_k^{(1)} ≥ T_{k−1}^{(1)} ] = P_j[ T_{k−1}^{(1)} < ∞ ] for k > j, k, j ∈ Z. Then it follows that T_k^{(1)} = T_{k−1}^{(1)} + T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}}, P_j-almost surely, k > j, k, j ∈ Z. Then by the strong Markov property we get

G_{j,k}(s) = E_j[ s^{T_k^{(1)}} ] = E_j[ s^{T_{k−1}^{(1)} + T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}}}, T_{k−1}^{(1)} < ∞ ]
= E_j[ s^{T_{k−1}^{(1)}} s^{T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}}}, T_{k−1}^{(1)} < ∞ ]

(Markov property)

= E_j[ s^{T_{k−1}^{(1)}} E_{X_{T_{k−1}^{(1)}}}[ s^{T_k^{(1)}} ], T_{k−1}^{(1)} < ∞ ]
= E_j[ s^{T_{k−1}^{(1)}} E_{k−1}[ s^{T_k^{(1)}} ] ] = E_j[ s^{T_{k−1}^{(1)}} ] E_{k−1}[ s^{T_k^{(1)}} ] = G_{j,k−1}(s) G_{k−1,k}(s).   (2.73)

From (2.73) we see by induction with respect to k that

G_{j,k}(s) = Π_{ℓ=j}^{k−1} G_{ℓ,ℓ+1}(s) = G_{0,1}(s)^{k−j}.   (2.74)


In the final step of (2.74) we used the fact that the P_ℓ-distribution of T_{ℓ+1}^{(1)} is the same as the P_0-distribution of T_1^{(1)}. This follows from the fact that the variables S_m, m ≥ 1, are independent identically (Bernoulli) distributed random variables. Notice that T_1^{(1)} = 1 + T_1^{(0)} ∘ ϑ_1, P_0-almost surely. Here T_1^{(0)} = inf{ n ≥ 0 : X_n = 1 }. Again we use the Markov property to obtain:

G_{0,1}(s) = E_0[ s^{T_1^{(1)}} ] = s E_0[ s^{T_1^{(0)} ∘ ϑ_1} ] = s E_0[ E_{X_1}[ s^{T_1^{(0)}} ] ]
= s E_0[ E_{X_1}[ s^{T_1^{(0)}} ], X_1 = 1 ] + s E_0[ E_{X_1}[ s^{T_1^{(0)}} ], X_1 = −1 ]

(notice that T_1^{(0)} = 0 P_1-almost surely, and T_1^{(0)} = T_1^{(1)} P_{−1}-almost surely)

= sp + sq E_{−1}[ s^{T_1^{(1)}} ] = sp + sq G_{−1,1}(s)
= sp + sq G_{0,2}(s) = sp + sq G_{0,1}(s)².   (2.75)

In the final step of (2.75) we employed (2.74) with j = −1 and k = 1. From (2.75) we infer

G_{0,1}(s) = (1 − (1 − 4pqs²)^{1/2})/(2qs).   (2.76)

By a similar token (i.e. by interchanging p and q) we also get

G_{1,0}(s) = (1 − (1 − 4pqs²)^{1/2})/(2ps).   (2.77)


Next we rewrite G_{0,0}(s):

G_{0,0}(s) = E_0[ s^{T_0^{(1)}} ] = s E_0[ E_0[ s^{T_0^{(0)} ∘ ϑ_1} | F_1 ] ] = s E_0[ E_{X_1}[ s^{T_0^{(0)}} ] ]

(on the event {X_1 = ±1} the equality T_0^{(0)} = T_0^{(1)} holds P_{X_1}-almost surely)

= s E_0[ E_1[ s^{T_0^{(1)}} ], X_1 = 1 ] + s E_0[ E_{−1}[ s^{T_0^{(1)}} ], X_1 = −1 ]
= sp E_1[ s^{T_0^{(1)}} ] + sq E_{−1}[ s^{T_0^{(1)}} ]

((space) translation invariance)

= sp E_1[ s^{T_0^{(1)}} ] + sq E_0[ s^{T_1^{(1)}} ] = sp G_{1,0}(s) + sq G_{0,1}(s)

(employ the equalities in (2.76) and (2.77))

= sp · (1 − (1 − 4pqs²)^{1/2})/(2ps) + sq · (1 − (1 − 4pqs²)^{1/2})/(2qs)
= 1 − (1 − 4pqs²)^{1/2}.

Then we infer

P_0[ T_0^{(1)} < ∞ ] = lim_{s↑1, s<1} G_{0,0}(s) = 1 − (1 − 4pq)^{1/2} = 1 − |1 − 2p| = 1 − |q − p|.

As a consequence we see that the non-symmetric random walk, i.e. the one with q ≠ p, is transient, and that the symmetric random walk (i.e. p = q = 1/2) is recurrent. However, since

E_0[ T_0^{(1)} ] = lim_{s↑1, s<1} G′_{0,0}(s) = lim_{s↑1, s<1} s/(1 − s²)^{1/2} = ∞,

it follows that the symmetric random walk is not positive recurrent.
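Two deterministic consistency checks of the formulas just derived (the numerical values of p, q, s are arbitrary choices for illustration): G_{0,1}(s) from (2.76) must solve the quadratic relation (2.75), and letting s ↑ 1 in G_{0,0}(s) must reproduce P_0[T_0^{(1)} < ∞] = 1 − |q − p|.

```python
import math

# Deterministic checks of (2.75)-(2.77); p, q, s are arbitrary illustrative values.
p, q, s = 0.3, 0.7, 0.9

# G_{0,1}(s) from (2.76) must solve the quadratic relation G = s p + s q G^2 of (2.75).
G01 = (1 - math.sqrt(1 - 4 * p * q * s * s)) / (2 * q * s)
residual = G01 - (s * p + s * q * G01 ** 2)
print(residual)                               # zero up to rounding error

# Letting s -> 1 in G_{0,0}(s) = 1 - sqrt(1 - 4 p q s^2) gives P_0[T_0^(1) < oo].
return_prob = 1 - math.sqrt(1 - 4 * p * q)
print(return_prob, 1 - abs(q - p))            # both equal 1 - |q - p| = 0.6 here
```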
















In the following lemma we prove some of the relevant equalities concerning 
stopping times and one-dimensional random walks. 

2.18. Lemma. Employing the above notation and hypotheses yields the following equalities:

(i) The equality T_j^{(1)} = 1 + T_j^{(0)} ∘ ϑ_1 holds P_k-almost surely for all states k, j ∈ Z.
(ii) For k > j, k, j ∈ Z, the equality T_k^{(1)} = T_{k−1}^{(1)} + T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}} holds P_j-almost surely.


PROOF. First let us prove assertion (i). Let j and k be states in Z. Then P_k-almost surely we have

1 + T_j^{(0)} ∘ ϑ_1 = 1 + inf{ n ≥ 0 : X_n ∘ ϑ_1 = j }
= inf{ n + 1 : n ≥ 0, X_{n+1} = j } = inf{ n ≥ 1 : X_n = j } = T_j^{(1)}.   (2.78)

The equality in (2.78) shows assertion (i). Next we prove the somewhat more difficult equality in (ii). As remarked above we have

P_j[ T_k^{(1)} ≥ T_{k−1}^{(1)} ] = P_j[ T_{k−1}^{(1)} < ∞ ].

Indeed, in order to visit the state k > j the process X_n, starting from j, has to visit the state k − 1, and hence P_j[ T_k^{(1)} ≥ T_{k−1}^{(1)} ] = P_j[ T_{k−1}^{(1)} < ∞ ]. Without loss of generality we may and shall assume that in the following arguments we consider the process X_n on the event {T_{k−1}^{(1)} < ∞}. (Otherwise we would automatically have T_{k−1}^{(1)} + T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}} = ∞ = T_k^{(1)}.) Next, on the event {T_{k−1}^{(1)} < ∞} we write:

T_{k−1}^{(1)} + T_k^{(1)} ∘ ϑ_{T_{k−1}^{(1)}} = T_{k−1}^{(1)} + inf{ n ≥ 1 : X_n ∘ ϑ_{T_{k−1}^{(1)}} = k }
= inf{ T_{k−1}^{(1)} + n : n ≥ 1, X_{n + T_{k−1}^{(1)}} = k }
= inf{ n > T_{k−1}^{(1)} : X_n = k } = T_k^{(1)}.   (2.79)

Assertion (ii) follows from (2.79). Altogether this proves Lemma 2.18. □


Perhaps it is useful to prove in an explicit manner that the one-dimensional 
random walk is a Markov chain. This is the content of the next lemma. 

2.19. Lemma. Let {S_n : n ∈ N, n ≥ 1} be independent identically distributed random variables taking their values in Z. Let X_0 = S_0 be the initial value of the process X_n, n ∈ N, where X_n = Σ_{m=0}^n S_m. Write

E_j[ f(X_0, X_1, …, X_n) ] = E[ f(X_0 + j, X_1 + j, …, X_n + j) ]   (2.80)

for all bounded functions f : Z^{n+1} → R, j ∈ Z, n ∈ N. Then the equality in (2.80) expresses the fact that the process

{ (Ω, F, P_j)_{j∈Z}, (X_n, n ∈ N), (ϑ_n, n ∈ N), Z }

is a space homogeneous process, with the property that the distribution of the process (X_{n+1} − X_n)_{n∈N} does not depend on the initial value j. It is also a time-homogeneous Markov chain.

PROOF. The first equality says that the finite dimensional distributions of the process (X_n)_{n∈N} are homogeneous in space. If

f(x_0, x_1, …, x_n) = g(x_1 − x_0, x_2 − x_1, …, x_n − x_{n−1}),

then from (2.80) we see that

E_j[ f(X_0, X_1, …, X_n) ] = E[ f(X_0 + j, X_1 + j, …, X_n + j) ]
= E[ g(X_1 − X_0, X_2 − X_1, …, X_n − X_{n−1}) ]
= E_j[ g(X_1 − X_0, X_2 − X_1, …, X_n − X_{n−1}) ].   (2.81)

The equality in (2.81) shows that the distribution of the process (X_{n+1} − X_n)_{n∈N} does not depend on the initial value j. Next we prove the Markov property. Let f be any bounded function on Z. To this end we consider:

E_j[ f(X_{n+1}) | F_n ] = E[ f(X_{n+1} + j) | F_n ] = E[ f(X_{n+1} − X_n + X_n + j) | F_n ]

(the variable X_{n+1} − X_n and the σ-field F_n are P-independent)

= ω ↦ E[ ω′ ↦ f(X_{n+1}(ω′) − X_n(ω′) + X_n(ω) + j) ]

(the P-distribution of X_{n+1}(ω′) − X_n(ω′) neither depends on n nor on the initial value X_0(ω′))

= ω ↦ E[ ω′ ↦ f(X_1(ω′) − X_0(ω′) + X_n(ω) + j) ]

(choose X_0(ω′) = j)

= ω ↦ E[ ω′ ↦ f(X_1(ω′) − j + X_n(ω) + j) ]
= ω ↦ E[ ω′ ↦ f(X_1(ω′) + X_n(ω)) ]
= ω ↦ E_{X_n(ω)}[ ω′ ↦ f(X_1(ω′)) ] = E_{X_n}[ f(X_1) ].   (2.82)

The equality in (2.82) proves Lemma 2.19. □

2.20. Remark. It would have been sufficient to take f of the form f = 1_{{k}}, k ∈ Z.

2.21. Remark. Lemma 2.19 proves more than just the Markov property for a random walk. It only uses the fact that the increments S_n are identically P-distributed and independent, and that the process (X_n)_{n∈N} possesses the same P_j-distribution as the P-distribution of the process (X_n + j)_{n∈N}, j ∈ Z. The proof only simplifies a little bit if one uses the random walk properties in an explicit manner.






1.1.2. Some remarks. From a historic point of view the references [6, 47, 151] are quite relevant. The references [10, 43] are, relatively speaking, easily accessible. The reference [153] gives a detailed treatment of martingale theory. Citations like [54, 140, 142, 145] establish a precise relationship between Feller operators, Markov processes, and solutions to the martingale problem. The references [49, 50] establish a relationship between hedging strategies (in mathematical finance) and (backward) stochastic differential equations.








2. Some additional comments on Markov processes 


In this section we will discuss some topics related to Markov chains and Markov processes. We consider a quadruple

{ (Ω, F, P), (X_n, n ∈ N), (ϑ_n, n ∈ N), (S, S) }.   (2.83)

In (2.83) the triple (Ω, F, P) stands for a probability space, Ω is called the "sample space", F is a σ-field on Ω, and P is a probability measure on F. The σ-field F is called the σ-field of events. The symbol (S, S) stands for the state space of our process (X_n : n ∈ N). In the present situation the state space S is discrete and countable with the discrete σ-field S.

Let (Ω, F, P) be a probability space and let X_j : Ω → S, j ∈ N = {0, 1, 2, …}, be state variables. It is assumed that the σ-field F is generated by the state variables X_j, j ∈ N. Let ϑ_k : Ω → Ω, k ∈ N, be time shift operators, which are also called time translation operators: X_j ∘ ϑ_k = X_{j+k}, j, k ∈ N. For a bounded σ(X_j : j ∈ N)-measurable stochastic variable F : Ω → R and x ∈ S we write

E_x[ F ] = E[ F | X_0 = x ] = E[ F, X_0 = x ] / P[ X_0 = x ].

We also write

T_{x,y} = P[ X_1 = y, X_0 = x ] / P[ X_0 = x ] = P[ X_1 = y | X_0 = x ].

Here x has the interpretation of the state at time 0, and y is the state at time 1. Let F_n, n ∈ N, be the internal memory up to the moment n. Hence F_n = σ(X_j : 0 ≤ j ≤ n).


2.22. Theorem. Suppose that (X_n : n ∈ N) is a stochastic process with values in a discrete countable state space S with the discrete σ-field S. The state variables X_n, n ∈ N, are defined on a probability space (Ω, F, P). Write, as above, T_{x,y} = P[ X_1 = y | X_0 = x ], x, y ∈ S. Then the following assertions are equivalent:

(1) For all finite sequences of states (s_0, …, s_{n+1}) in S the following identity holds:

P[ X_{n+1} = s_{n+1} | X_0 = s_0, …, X_n = s_n ] = T_{s_n, s_{n+1}};   (2.84)

(2) For all bounded functions f : S → R and for all times n ∈ N the following equality holds:

E[ f(X_{n+1}) | F_n ] = E_{X_n}[ f(X_1) ] P-almost surely;   (2.85)

(3) For all bounded functions f_0, …, f_k on S and for all times n ∈ N the following equality holds P-almost surely:

E[ f_0(X_n) f_1(X_{n+1}) ⋯ f_k(X_{n+k}) | F_n ] = E_{X_n}[ f_0(X_0) f_1(X_1) ⋯ f_k(X_k) ];   (2.86)

(4) For all bounded measurable functions F : (Ω, F) → R (stochastic variables) and for all n ∈ N the following identity holds:

E[ F ∘ ϑ_n | F_n ] = E_{X_n}[ F ] P-almost surely;   (2.87)

(5) For all bounded functions f : S → R and for all (F_n)_{n∈N}-stopping times τ : Ω → [0, ∞] the following equality holds:

E[ f(X_{τ+1}) | F_τ ] = E_{X_τ}[ f(X_1) ] P-almost surely on the event {τ < ∞};   (2.88)

(6) For all bounded measurable functions F : (Ω, F) → R (random variables) and for all stopping times τ the following identity holds:

E[ F ∘ ϑ_τ | F_τ ] = E_{X_τ}[ F ] P-almost surely on the event {τ < ∞}.   (2.89)


Before we prove Theorem 2.22 we make some remarks and give some explanation.


2.23. Remark. Let Ω = S^N be equipped with the product σ-field, and let X_j : Ω → S be defined by X_j(ω) = ω_j, where ω = (ω_0, …, ω_j, …) belongs to Ω. If ϑ_k : Ω → Ω is defined by ϑ_k(ω_0, …, ω_j, …) = (ω_k, …, ω_{j+k}, …), then it follows that X_j ∘ ϑ_k = X_{j+k}.

2.24. Remark. Instead of one probability space (Ω, F, P) we often consider a family of probability spaces (Ω, F, P_x)_{x∈S}. The probabilities P_x, x ∈ S, are determined by

E_x[ F ] = E[ F | X_0 = x ] = E[ F, X_0 = x ] / P[ X_0 = x ].   (2.90)

Here F : Ω → R is F-B_R-measurable, and hence by definition it is a random or stochastic variable. Since P_x[A] = E_x[ 1_A ], A ∈ F, and P_x[Ω] = E_x[1] = 1, the measure P_x is a probability measure on F.


2.25. Remark. Let F : Ω → R be a bounded stochastic variable. The variable E_{X_n}[ F ] is a stochastic variable which is measurable with respect to the σ-field σ(X_n), i.e. the σ-field generated by X_n. In fact we have

E_{X_n(ω)}[ F ] = E[ F | X_0 = X_n(ω) ] = E[ F, X_0 = X_n(ω) ] / P[ X_0 = X_n(ω) ]
= E[ ω′ ↦ F(ω′) 1_{{X_0 = X_n(ω)}}(ω′) ] / P[ X_0 = X_n(ω) ].   (2.91)

If we fix ω ∈ Ω, then in (2.91) everything is determined, and there should be no ambiguity any more.


2.26. Remark. Fix n ∈ N. The σ-field F_n is generated by events of the form

{ (X_0, X_1, …, X_n) = (s_0, s_1, …, s_n) }.

Here (s_0, s_1, …, s_n) varies over S^{n+1}. It follows that

F_n = σ({ (X_0, X_1, …, X_n) = (s_0, s_1, …, s_n) } : (s_0, s_1, …, s_n) ∈ S^{n+1})
= { { (X_0, X_1, …, X_n) ∈ B } : B ⊆ S^{n+1} }
= σ(X_0, X_1, …, X_n).   (2.92)

The σ-field in (2.92) is the smallest σ-field rendering all state variables X_j, 0 ≤ j ≤ n, measurable. It is noticed that F_n ⊆ F.








2.27. Remark. Next we discuss conditional expectations. Again let F : Ω → R be a bounded stochastic variable. If we write Z = E[ F | F_n ], then we mean the following:

(1) The stochastic variable Z is F_n-B_R-measurable. This is the qualitative aspect of the notion of conditional expectation.
(2) The stochastic variable Z possesses the property that E[ F, A ] = E[ Z, A ] for all events A ∈ F_n. This is the quantitative aspect of the notion of conditional expectation.

Notice that the property in (2) is equivalent to the following one: the stochastic variable Z satisfies the equality E[ F | A ] = E[ Z | A ] for all events A ∈ F_n with P[A] > 0.

2.28. Remark. Let 𝒢 be a sub-σ-field of F. The mapping F ↦ E[ F | 𝒢 ] is an orthogonal projection from L²(Ω, F, P) onto L²(Ω, 𝒢, P). Let F ∈ L²(Ω, F, P), and put Z = E[ F | 𝒢 ]. In fact we have to verify the following conditions:

(1) Z ∈ L²(Ω, 𝒢, P);
(2) If G ∈ L²(Ω, 𝒢, P), then the following inequality is satisfied:

E[ |F − Z|² ] ≤ E[ |F − G|² ].

This claim is left as an exercise for the reader. For more details on conditional expectations see Section 1 in Chapter 1.

2.29. Remark. Next we will give an explicit formula for the conditional expectation in the setting of a countable discrete state space S. Let F_n = σ(X_0, X_1, …, X_n), where X_j : Ω → S, 0 ≤ j ≤ n, are state variables with a discrete countable state space S. In addition, let F : Ω → R be a bounded stochastic variable. Then we have

E[ F | F_n ] = Σ_{(i_0,…,i_n)∈S^{n+1}} E[ F | X_0 = i_0, …, X_n = i_n ] 1_{{X_0=i_0}} ⋯ 1_{{X_n=i_n}}.   (2.93)

Writing the conditional expectation in (2.93) in an explicit manner as a function of ω yields

E[ F | F_n ](ω) = Σ_{(i_0,…,i_n)∈S^{n+1}} E[ F | X_0 = i_0, …, X_n = i_n ] 1_{{X_0=i_0}}(ω) ⋯ 1_{{X_n=i_n}}(ω).   (2.94)

From (2.93) and also (2.94) it is clear that the conditional expectation E[ F | F_n ] is F_n-measurable.
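Formula (2.93) can be made concrete on a toy example. The joint law below is an arbitrary assumption made for illustration; the snippet builds E[F | X_0, X_1] by averaging over the atoms {X_0 = i_0, X_1 = i_1}, and verifies the quantitative property of Remark 2.27 with A the whole sample space (the tower property).

```python
from itertools import product

# Toy instance of formula (2.93); the joint law below is an arbitrary assumption.
weights = [1, 2, 3, 4, 5, 6, 7, 8]
total = sum(weights)
joint = {path: w / total
         for w, path in zip(weights, product((0, 1), repeat=3))}   # law of (X_0, X_1, X_2)

F = lambda x0, x1, x2: x0 + 2 * x1 + 4 * x2    # a bounded stochastic variable

def cond_exp(i0, i1):
    """E[F | X_0 = i0, X_1 = i1], the coefficient appearing in (2.93)."""
    mass = sum(pr for (a, b, c), pr in joint.items() if (a, b) == (i0, i1))
    num = sum(F(a, b, c) * pr for (a, b, c), pr in joint.items() if (a, b) == (i0, i1))
    return num / mass

# The quantitative property of Remark 2.27 with A the whole space: E[E[F|F_1]] = E[F].
EF = sum(F(*path) * pr for path, pr in joint.items())
tower = sum(cond_exp(a, b) * pr for (a, b, c), pr in joint.items())
print(EF, tower)                                # equal
```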

2.30. Remark. Put F = F_∞ = σ(X_0, …, X_n, …) = σ(X), where X : Ω → S^N is the variable defined by X(ω) = (X_0(ω), …, X_n(ω), …), ω ∈ Ω. Then

F = F_∞ = { {X ∈ B} : B is measurable with respect to the product σ-field on S^N }.   (2.95)

2.31. Remark. Let τ : Ω → N ∪ {∞} be a random variable. This random variable is called a stopping time relative to the filtration (F_n)_{n∈N}, or, more briefly, τ is called an (F_n)_{n∈N}-stopping time, provided that for every k ∈ N an event of the form {τ ≤ k} is F_k-measurable. The latter property is equivalent to the following one. For every k ∈ N the event {τ = k} is F_k-measurable. Note that {τ = k} = {τ ≤ k} \ {τ ≤ k − 1}, k ∈ N, k ≥ 1, and {τ ≤ k} = ∪_{j=0}^k {τ = j}. From these equalities it follows that τ is an (F_n)_{n∈N}-stopping time if and only if for every k ∈ N the event {τ = k} is F_k-measurable.

2.32. Remark. Let B be a subset of S. Important examples of stopping times are

τ_B = inf{ k ≥ 0 : X_k ∈ B } on ∪_{k=0}^∞ {X_k ∈ B}, and ∞ elsewhere;
τ_B^1 = inf{ k ≥ 1 : X_k ∈ B } on ∪_{k=1}^∞ {X_k ∈ B}, and ∞ elsewhere.   (2.96)

Similarly we also write τ_B^s = inf{ k ≥ s : X_k ∈ B } on the event ∪_{k=s}^∞ {X_k ∈ B}, and τ_B^s = ∞ elsewhere. The time τ_B is called the first income time, and τ_B^1 is called the first hitting time, or the first income time after 0.

We also notice that τ_B^1 = min{ k ≥ 1 : X_k ∈ B } on ∪_{k=1}^∞ {X_k ∈ B} and τ_B^1 = ∞ on ∩_{k=1}^∞ {X_k ∈ S\B}. In addition: 1 + τ_B ∘ ϑ_1 = τ_B^1.
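The identity 1 + τ_B ∘ ϑ_1 = τ_B^1 can be verified path by path; in the sketch below the finite sample paths and the set B are illustrative assumptions.

```python
# Path-by-path verification of 1 + tau_B o theta_1 = tau_B^1; the finite sample
# paths and the set B are illustrative assumptions.
INF = float("inf")

def tau_B(path, B, start=0):
    """First income time inf{k >= start : X_k in B} along a finite path (INF if never)."""
    for k in range(start, len(path)):
        if path[k] in B:
            return k
    return INF

def tau_B_1(path, B):
    """First hitting time inf{k >= 1 : X_k in B}."""
    return tau_B(path, B, start=1)

B = {2}
for path in [(2, 0, 1, 2, 1), (0, 1, 1, 0, 2), (2, 2, 0, 1, 1), (0, 0, 0)]:
    lhs = 1 + tau_B(path[1:], B)     # theta_1 drops the time-0 coordinate
    rhs = tau_B_1(path, B)
    print(path, lhs, rhs)            # lhs == rhs on every path, including INF
```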

2.33. Remark. Again let τ : Ω → N ∪ {∞} be an (F_n)_{n∈N}-stopping time. The σ-field F_τ, containing the information from the past up to time τ, is defined by

F_τ = { A ∈ F : A ∩ {τ ≤ k} ∈ F_k for all k ∈ N } = σ(X_{j∧τ} : j ∈ N),   (2.97)

where X_{j∧τ}(ω) = X_{j∧τ(ω)}(ω), ω ∈ Ω.
















2.34. Remark. Let $F$ be a stochastic variable. What is meant by $F \circ \vartheta_k$ and $F \circ \vartheta_\tau$ on the event $\{\tau < \infty\}$? Here $k \in \mathbb{N}$, and $\tau$ is an $(\mathcal{F}_n)_{n\in\mathbb{N}}$-stopping time. For $F = \prod_{j=0}^{n} f_j(X_j)$ we write:

$F \circ \vartheta_k = \prod_{j=0}^{n} f_j(X_j) \circ \vartheta_k = \prod_{j=0}^{n} f_j(X_{j+k}),$

and on the event $\{\tau < \infty\}$

$F \circ \vartheta_\tau = \prod_{j=0}^{n} f_j(X_j) \circ \vartheta_\tau = \prod_{j=0}^{n} f_j(X_{j+\tau}).$


2.35. Proposition. Let

$\left(T^{(n)}_{x,y}\right)_{(x,y)\in S\times S}, \quad n \in \mathbb{N}, \qquad$ (2.98)

be a sequence of square matrices with positive entries, possibly with countably many entries (when $S$ is countable, not finite). Put

$\left(T^{(1)} T^{(2)} \cdots T^{(n+1)}\right)_{s_0,\,s_{n+1}} = \sum_{s_1,\dots,s_n \in S}\ \prod_{i=1}^{n+1} T^{(i)}_{s_{i-1},\,s_i}, \quad s_0,\ s_{n+1} \in S. \qquad$ (2.99)

The equalities in (2.99) are to be considered as matrix multiplications. Fix $1 \le n_1 < \cdots < n_k \le n$, and let the measure spaces $\left(S^k,\ \bigotimes_{j=1}^{k}\mathcal{S},\ \mu^{x}_{n_1,\dots,n_k,n+1,y}\right)$, $(x,y)\in S\times S$, $n \in \mathbb{N}$, be determined by the equalities:

$\int_{S^k} \prod_{j=1}^{k} f_j(s_j)\, d\mu^{x}_{n_1,\dots,n_k,n+1,y}(s_1,\dots,s_k) = \sum_{(s_1,\dots,s_k)\in S^k}\ \prod_{j=1}^{k+1}\left(T^{(n_{j-1}+1)}\cdots T^{(n_j)}\right)_{s_{j-1},\,s_j}\ \prod_{j=1}^{k} f_j(s_j), \quad f_j \in L^\infty(S,\mathcal{S}), \qquad$ (2.100)

where $(s_0, s_{k+1}) = (x,y) \in S\times S$, $n_0 = 0$, and $n_{k+1} = n+1$. Then for every $1 \le j_0 \le n$, and $f_j \in L^\infty(S,\mathcal{S})$, $1 \le j \le n$, with $f_{j_0} = 1$, this family satisfies the following equality:

$\int_{S^n} \prod_{j=1}^{n} f_j(s_j)\, d\mu^{x}_{1,\dots,n,n+1,y}(s_1,\dots,s_n) = \int_{S^{n-1}} \prod_{j=1,\,j\ne j_0}^{n} f_j(s_j)\, d\mu^{x,\prime}_{1,\dots,j_0-1,\,j_0+1,\dots,n,n+1,y}(s_1,\dots,s_{j_0-1},s_{j_0+1},\dots,s_n), \qquad$ (2.101)

where $\mu^{x,\prime}$ is defined as in (2.100) with the pair $T^{(j_0)}$, $T^{(j_0+1)}$ replaced by the single matrix $T'$, and where $T'_{x,y} = \sum_{z\in S} T^{(j_0)}_{x,z} T^{(j_0+1)}_{z,y}$ for all $(x,y) \in S\times S$ (matrix multiplication). Let $1 \le n_1 < \cdots < n_k \le n$, and put

$\mu^{x}_{0,n_1,\dots,n_k,n+1}(B_0 \times B) = \mathbf{1}_{B_0}(x)\,\sum_{y\in S}\mu^{x}_{n_1,\dots,n_k,n+1,y}(B), \quad B_0 \in \mathcal{S},\ B \in \bigotimes^{k}\mathcal{S}. \qquad$ (2.102)

Suppose that the matrices $\left(T^{(n)}_{x,y}\right)_{(x,y)\in S\times S}$, $n \in \mathbb{N}$, are stochastic. Then the measures in (2.102) do not depend on $n+1$. Moreover, the following assertions are equivalent:




(a) The family of measure spaces

$\left\{\left(S^{k+1},\ \bigotimes^{k+1}\mathcal{S},\ \mu^{x}_{0,n_1,\dots,n_k,n+1}\right) : 1 \le n_1 < \cdots < n_k \le n,\ n \in \mathbb{N}\right\} \qquad$ (2.103)

is a consistent family of probability measure spaces.

(b) The family of measure spaces defined in (2.100) is consistent.

(c) For every $n \in \mathbb{N}$ and $(x,y) \in S\times S$ the equality $T^{(n)}_{x,y} = T^{n}_{x,y}$ holds, where $T = T^{(1)}$.

Suppose that the family in (2.103) is a consistent family of probability spaces. Then the corresponding process

$\left\{\left(\Omega, \mathcal{F}, \mathbb{P}_x\right)_{x\in S},\ (X_n : n \in \mathbb{N}),\ (\vartheta_n : n \in \mathbb{N}),\ (S, \mathcal{S})\right\} \qquad$ (2.104)

is a Markov chain if and only if for $(x,y) \in S\times S$ and $n, m \in \mathbb{N}$ the following matrix multiplication equality holds:

$T^{(n+m)}_{x,y} = \sum_{z\in S} T^{(n)}_{x,z}\, T^{(m)}_{z,y} = \sum_{z\in S} T^{n}_{x,z}\, T^{m}_{z,y}. \qquad$ (2.105)

This means that

$\mathbb{P}_x\left[X_0 \in B_0, \dots, X_n \in B_n\right] = \mathbf{1}_{B_0}(x)\,\sum_{y\in S}\mu^{x}_{1,\dots,n,n+1,y}\left(B_1 \times \cdots \times B_n\right),$

for $B_j \in \mathcal{S}$, $0 \le j \le n$, and that the family in (2.104) possesses the Markov property if and only if (2.105) holds.

In addition, we have $T^{n}_{x,y} = \mathbb{P}_x\left[X_n = y\right]$, $x, y \in S$; i.e. the quantities $T^{n}_{x,y}$ represent the $n$-step transition probabilities from the state $x$ to the state $y$.
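In the time-homogeneous case, where every $T^{(n)}$ equals one stochastic matrix $T$, condition (c) and the identity (2.105) reduce to ordinary associativity of matrix powers. A small numerical sketch (the two-state matrix is an arbitrary illustrative choice):

```python
# n-step transition probabilities of a two-state chain as matrix powers.
T = [[0.9, 0.1],
     [0.4, 0.6]]

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(A, n):
    # n >= 1; T^n_{x,y} = P_x[X_n = y] for the homogeneous chain.
    R = A
    for _ in range(n - 1):
        R = mat_mul(R, A)
    return R

T5, T3, T2 = mat_pow(T, 5), mat_pow(T, 3), mat_pow(T, 2)
CK = mat_mul(T3, T2)  # Chapman-Kolmogorov: T^{3+2} = T^3 T^2
print(T5)

assert all(abs(T5[i][j] - CK[i][j]) < 1e-12 for i in range(2) for j in range(2))
# Rows of a stochastic matrix power still sum to 1.
assert all(abs(sum(row) - 1.0) < 1e-12 for row in T5)
```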


2.36. Theorem. Let the notation be as in Theorem 2.22. The following assertions are equivalent:

(1) For every $s \in S$, for every bounded function $f : S \to \mathbb{R}$, and for all $n \in \mathbb{N}$ the following equality holds $\mathbb{P}_s$-almost surely:

$\mathbb{E}_s\left[f(X_{n+1}) \mid \mathcal{F}_n\right] = \mathbb{E}_{X_n}\left[f(X_1)\right]. \qquad$ (2.106)

(2) For every bounded function $f : S \to \mathbb{R}$, and for all $n \in \mathbb{N}$ the following equality holds $\mathbb{P}$-almost surely:

$\mathbb{E}\left[f(X_{n+1}) \mid \mathcal{F}_n\right] = \mathbb{E}_{X_n}\left[f(X_1)\right]. \qquad$ (2.107)

2.37. Remark. From the proof it follows that in Theorem 2.36 we may replace the stochastic variable $f(X_1)$ by any bounded stochastic variable $Y : \Omega \to \mathbb{R}$. At the same time $f(X_{n+1}) = f(X_1) \circ \vartheta_n$ has to be replaced by $Y \circ \vartheta_n$.

2.38. Remark. Theorem 2.36 together with Remark 2.37 shows that throughout Theorem 2.22 we may replace the probability $\mathbb{P}$ with $\mathbb{P}_s$ for any $s \in S$. Consequently, we could have defined a time-homogeneous Markov chain as a quadruple

$\left\{\left(\Omega, \mathcal{F}, \mathbb{P}_s\right)_{s\in S},\ (X_n : n \in \mathbb{N}),\ (\vartheta_k,\ k \in \mathbb{N}),\ (S, \mathcal{S})\right\}$

satisfying the equivalent conditions in Theorem 2.22 with $\mathbb{P}_s$, for all $s \in S$, instead of $\mathbb{P}$.




Proof of Theorem 2.36. (1) $\Rightarrow$ (2) Let $s \in S$, $f : S \to \mathbb{R}$ a bounded function, and $n \in \mathbb{N}$. From (2.106) we infer

$\mathbb{E}_s\left[f(X_{n+1}), A\right] = \mathbb{E}_s\left[\mathbb{E}_{X_n}[f(X_1)], A\right] \quad\text{for all } A \in \mathcal{F}_n. \qquad$ (2.108)

From (2.108) we infer

$\mathbb{E}\left[f(X_{n+1}), A, X_0 = s\right] = \mathbb{E}\left[\mathbb{E}_{X_n}[f(X_1)], A, X_0 = s\right] \quad\text{for all } A \in \mathcal{F}_n, \text{ and } s \in S. \qquad$ (2.109)

By summing over $s \in S$ in (2.109) we obtain

$\mathbb{E}\left[f(X_{n+1}), A\right] = \mathbb{E}\left[\mathbb{E}_{X_n}[f(X_1)], A\right] \quad\text{for all } A \in \mathcal{F}_n. \qquad$ (2.110)

From (2.110) the equality in (2.107) easily follows.

(2) $\Rightarrow$ (1) Let $f : S \to \mathbb{R}$ and $n \in \mathbb{N}$ be such that (2.107) holds. Then, since all events of the form $\{X_0 = s\}$, $s \in S$, belong to $\mathcal{F}_n$, (2.107) implies that (2.109) holds for $f$, and hence, by dividing by $\mathbb{P}[X_0 = s]$, for $s \in S$, we obtain (2.108). Hence (2.106) follows.

This completes the proof of Theorem 2.36. $\Box$






The following theorem is similar to the formulation of Theorem 2.36, but now with stopping times and with Remark 2.37 taken into account:

2.39. Theorem. Let the notation be as in Theorem 2.22, and let $\tau : \Omega \to [0,\infty]$ be an $(\mathcal{F}_n)_{n\in\mathbb{N}}$-stopping time. The following assertions are equivalent:

(1) For every $s \in S$, and for every bounded stochastic variable $Y : \Omega \to \mathbb{R}$ the following equality holds $\mathbb{P}_s$-almost surely on the event $\{\tau < \infty\}$:

$\mathbb{E}_s\left[Y \circ \vartheta_\tau \mid \mathcal{F}_\tau\right] = \mathbb{E}_{X_\tau}\left[Y\right]. \qquad$ (2.111)

(2) For every bounded stochastic variable $Y : \Omega \to \mathbb{R}$ the following equality holds $\mathbb{P}$-almost surely on the event $\{\tau < \infty\}$:

$\mathbb{E}\left[Y \circ \vartheta_\tau \mid \mathcal{F}_\tau\right] = \mathbb{E}_{X_\tau}\left[Y\right]. \qquad$ (2.112)

2.40. Remark. Let $\tau : \Omega \to \mathbb{N} \cup \{\infty\}$ be an $(\mathcal{F}_n)_{n\in\mathbb{N}}$-stopping time, and let $A \in \mathcal{F}_\tau$. Let $m \in \mathbb{N} \cup \{\infty\}$. Put $\tau_m = m\,\mathbf{1}_{\Omega\setminus A} + \tau\,\mathbf{1}_A$. Then $\tau_m$ is an $(\mathcal{F}_n)_{n\in\mathbb{N}}$-stopping time. If $\mathbb{P}[A] < 1$ and $m = \infty$, then $\tau_m = \infty$ on the event $\Omega\setminus A$, which is non-negligible.

2.41. Definition. Let $\Omega$ be a set and let $\mathcal{D}$ be a collection of subsets of $\Omega$. Then $\mathcal{D}$ is called a Dynkin system, if it has the following properties:

(a) $\Omega \in \mathcal{D}$;

(b) if $A$ and $B$ belong to $\mathcal{D}$ and if $A \supseteq B$, then $A\setminus B$ belongs to $\mathcal{D}$;

(c) if $(A_n : n \in \mathbb{N})$ is an increasing sequence of elements of $\mathcal{D}$, then the union $\bigcup_{n=1}^{\infty} A_n$ belongs to $\mathcal{D}$.

In the literature Dynkin systems are also called $\lambda$-systems: see e.g. [3]. A $\pi$-system is a collection of subsets which is closed under finite intersections. A Dynkin system which is also a $\pi$-system is a $\sigma$-field. The following result on Dynkin systems, known as the $\pi$-$\lambda$ theorem, gives a stronger result.

2.42. Theorem. Let $\mathcal{M}$ be a collection of subsets of $\Omega$, which is stable under finite intersections, so that $\mathcal{M}$ is a $\pi$-system on $\Omega$. The Dynkin system generated by $\mathcal{M}$ coincides with the $\sigma$-field generated by $\mathcal{M}$.

Proof. Let $\mathcal{D}(\mathcal{M})$ be the smallest Dynkin system containing $\mathcal{M}$, i.e. $\mathcal{D}(\mathcal{M})$ is the Dynkin system generated by $\mathcal{M}$. For all $A \in \mathcal{D}(\mathcal{M})$, we define:

$\Gamma(A) := \left\{B \in \mathcal{D}(\mathcal{M}) : A \cap B \in \mathcal{D}(\mathcal{M})\right\}.$

Then we have:

(1) if $A$ belongs to $\mathcal{M}$, then $\mathcal{M} \subseteq \Gamma(A)$;

(2) for all $A \in \mathcal{M}$, $\Gamma(A)$ is a Dynkin system on $\Omega$;

(3) if $A$ belongs to $\mathcal{M}$, then $\mathcal{D}(\mathcal{M}) \subseteq \Gamma(A)$;

(4) if $B$ belongs to $\mathcal{D}(\mathcal{M})$, then $\mathcal{M} \subseteq \Gamma(B)$;

(5) for all $B \in \mathcal{D}(\mathcal{M})$ the inclusion $\mathcal{D}(\mathcal{M}) \subseteq \Gamma(B)$ holds.

It follows that $\mathcal{D}(\mathcal{M})$ is also a $\pi$-system. It is easy to see that a Dynkin system which is at the same time a $\pi$-system is in fact a $\sigma$-field (or $\sigma$-algebra). This completes the proof of Theorem 2.42. $\Box$




2.43. Theorem. Let $\Omega$ be a set and let $\mathcal{M}$ be a collection of subsets of it, which is stable (or closed) under finite intersections. Let $\mathcal{K}$ be a vector space of real valued functions on $\Omega$ satisfying:

(i) the constant function $1$ belongs to $\mathcal{K}$, and $\mathbf{1}_A$ belongs to $\mathcal{K}$ for all $A \in \mathcal{M}$;

(ii) if $(f_n : n \in \mathbb{N})$ is an increasing sequence of non-negative functions in $\mathcal{K}$ such that $f = \sup_{n\in\mathbb{N}} f_n$ is finite (bounded), then $f$ belongs to $\mathcal{K}$.

Then $\mathcal{K}$ contains all real valued (bounded) functions on $\Omega$ that are $\sigma(\mathcal{M})$-measurable.

Proof. Put $\mathcal{D} = \{A \subseteq \Omega : \mathbf{1}_A \in \mathcal{K}\}$. Then by (i) $\Omega$ belongs to $\mathcal{D}$ and $\mathcal{D} \supseteq \mathcal{M}$. If $A$ and $B$ are in $\mathcal{D}$ and if $B \subseteq A$, then $A\setminus B$ belongs to $\mathcal{D}$. If $(A_n : n \in \mathbb{N})$ is an increasing sequence in $\mathcal{D}$, then $\mathbf{1}_{\bigcup_n A_n} = \sup_n \mathbf{1}_{A_n}$ belongs to $\mathcal{D}$ by (ii). Hence $\mathcal{D}$ is a Dynkin system that contains $\mathcal{M}$. Since $\mathcal{M}$ is closed under finite intersections, it follows by Theorem 2.42 that $\mathcal{D} \supseteq \sigma(\mathcal{M})$. If $f \ge 0$ is $\sigma(\mathcal{M})$-measurable, then $f = \sup_n 2^{-n}\sum_{j=1}^{n2^n} \mathbf{1}_{\{f \ge j2^{-n}\}}$. Since the subsets $\{f \ge j2^{-n}\}$, $j, n \in \mathbb{N}$, belong to $\sigma(\mathcal{M})$, we see that $f$ is a member of $\mathcal{K}$. Here we employed the fact that $\sigma(\mathcal{M}) \subseteq \mathcal{D}$. If $f$ is $\sigma(\mathcal{M})$-measurable, then we write $f$ as a difference of two non-negative $\sigma(\mathcal{M})$-measurable functions. $\Box$

The previous theorems (Theorem 2.42 and Theorem 2.43) are used in the following form. Let $\Omega$ be a set and let $(S_i, \mathcal{S}_i)_{i\in I}$ be a family of measurable spaces, indexed by an arbitrary set $I$. For each $i \in I$, let $\mathcal{M}_i$ denote a collection of subsets of $S_i$, closed under finite intersections, which generates the $\sigma$-field $\mathcal{S}_i$, and let $f_i : \Omega \to S_i$ be a map from $\Omega$ to $S_i$. In this context the following two propositions follow.

2.44. Proposition. Let $\mathcal{M}$ be the collection of all sets of the form $\bigcap_{i\in J} f_i^{-1}(A_i)$, $A_i \in \mathcal{M}_i$, $i \in J$, $J \subseteq I$, $J$ finite. Then $\mathcal{M}$ is a collection of subsets of $\Omega$ which is stable under finite intersections and $\sigma(\mathcal{M}) = \sigma(f_i : i \in I)$.

2.45. Proposition. Let $\mathcal{K}$ be a vector space of real-valued functions on $\Omega$ such that:

(i) the constant function $1$ belongs to $\mathcal{K}$;

(ii) if $(h_n : n \in \mathbb{N})$ is an increasing sequence of non-negative functions in $\mathcal{K}$ such that $h = \sup_n h_n$ is finite (bounded), then $h$ belongs to $\mathcal{K}$;

(iii) $\mathcal{K}$ contains all products of the form $\prod_{i\in J} \mathbf{1}_{A_i} \circ f_i$, $J \subseteq I$, $J$ finite, and $A_i \in \mathcal{M}_i$, $i \in J$.

Under these assumptions $\mathcal{K}$ contains all real-valued (bounded) functions on $\Omega$ that are $\sigma(f_i : i \in I)$-measurable.

The theorems 2.42 and 2.43, and the propositions 2.44 and 2.45 are called the monotone class theorem.




In the propositions 2.44 and 2.45 we may take $S_i = S$, $\mathcal{M}_i$ the collection of finite subsets of $S_i$, and $f_i = X_i$, $i \in I = \mathbb{N}$.


3. More on Brownian motion 


Further on in this book on stochastic processes we will discuss Brownian motion in more detail. In fact we will consider Brownian motion as a Gaussian process, as a Markov process, and as a martingale (which includes a discussion of Itô calculus). In addition, Brownian motion can be viewed as a weak limit of a scaled symmetric random walk. For this result we need a Functional Central Limit Theorem (FCLT), which is a generalization of the classical central limit theorem.

2.46. Theorem (Multivariate Classical Central Limit Theorem). Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $\{Z_n : n \in \mathbb{N}\}$ be a sequence of $\mathbb{P}$-independent and $\mathbb{P}$-identically distributed random variables with values in $\mathbb{R}^d$ in $L^1(\Omega, \mathcal{F}, \mathbb{P}; \mathbb{R}^d)$. Let $\mu = \mathbb{E}[Z_1]$, and let $D$ be the dispersion matrix of $Z_1$ (i.e. the variance-covariance matrix of the random vector $Z_1$). Then there exists a centered Gaussian (or multivariate normal) random vector $X$ with dispersion matrix $D$ such that the sequence

$Y_n = \frac{Z_1 + \cdots + Z_n - n\mu}{\sqrt{n}}$

converges weakly (or in distribution) to the centered random vector $X$ with dispersion matrix $D$ as $n \to \infty$. The latter means that $\lim_{n\to\infty} \mathbb{E}\left[f(Y_n)\right] = \mathbb{E}\left[f(X)\right]$ for all bounded continuous functions $f : \mathbb{R}^d \to \mathbb{R}$.

Notice that by a non-trivial density argument we only need to prove the equality for all functions $f$ of the form $f(x) = e^{-i\langle\xi, x\rangle}$, $x \in \mathbb{R}^d$, $\xi \in \mathbb{R}^d$.
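The theorem can be illustrated by simulation. In the sketch below (an illustrative choice of summands, not part of the text) the $Z_j$ are uniform on the four unit steps in $\mathbb{Z}^2$, so that $\mu = (0,0)$ and $D = \operatorname{diag}(\tfrac12, \tfrac12)$; the empirical dispersion matrix of the normalized sums is then close to $D$.

```python
import random

random.seed(7)

# Z_j uniform on {(1,0),(0,1),(-1,0),(0,-1)}: mean mu = (0,0),
# dispersion matrix D = diag(1/2, 1/2).
steps = [(1, 0), (0, 1), (-1, 0), (0, -1)]
n, trials = 100, 4000

samples = []
for _ in range(trials):
    sx = sy = 0
    for _ in range(n):
        dx, dy = random.choice(steps)
        sx += dx
        sy += dy
    samples.append((sx / n ** 0.5, sy / n ** 0.5))  # (S_n - n mu)/sqrt(n)

# Empirical dispersion matrix of the normalized sums.
mx = sum(x for x, _ in samples) / trials
my = sum(y for _, y in samples) / trials
cxx = sum((x - mx) ** 2 for x, _ in samples) / trials
cyy = sum((y - my) ** 2 for _, y in samples) / trials
cxy = sum((x - mx) * (y - my) for x, y in samples) / trials
print(cxx, cyy, cxy)

# Should be close to D = diag(1/2, 1/2).
assert abs(cxx - 0.5) < 0.05 and abs(cyy - 0.5) < 0.05
assert abs(cxy) < 0.05
```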

Next let us give a (formal) definition of Brownian motion. 

2.47. Definition. A one-dimensional Brownian motion with drift $\mu$ and diffusion coefficient $\sigma^2$ is a stochastic process $\{X(t) : t \ge 0\}$ with continuous sample paths having independent Gaussian increments, with mean and variance of an increment $X(t+s) - X(t)$ given by $s\mu = \mathbb{E}\left[X(t+s) - X(t)\right]$ and $s\sigma^2 = \operatorname{Var}\left(X(t+s) - X(t)\right)$, $s, t \ge 0$. If $X(0) = x$, then this Brownian motion is said to start at $x$. A Brownian motion with drift $\mu = 0$ and $\sigma^2 = 1$ is called a standard Brownian motion.

One of the problems is whether or not such a process exists. One way of resolving this problem is to put the Functional Central Limit Theorem to work. Let us prepare for this approach. Let $\{Z_j : j \in \mathbb{N}\}$ be a sequence of centered independent identically distributed real valued random variables in $L^2(\Omega, \mathcal{F}, \mathbb{P})$ with variance $\sigma^2 = \mathbb{E}\left[Z_j^2\right]$. For example, these variables could be Bernoulli variables taking the values $+\sigma$ and $-\sigma$ with the same probability $\tfrac12$. Put $S_0 = Z_0 = 0$,




$S_n = Z_1 + \cdots + Z_n$, $n \in \mathbb{N}$, $n \ge 1$. Define for each scale parameter $n \ge 1$ the stochastic process $X^{(n)}(t)$ by

$X^{(n)}(t) = \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}}, \quad t \ge 0. \qquad$ (2.113)

Here $\lfloor nt \rfloor$ is the integer part of $nt$, i.e. the largest integer $k$ for which $k \le nt < k+1$. Then it is relatively easy to see that

$\mathbb{E}\left[X^{(n)}(t)\right] = 0, \quad\text{and}\quad \operatorname{Var}\left(X^{(n)}(t)\right) = \mathbb{E}\left[X^{(n)}(t)^2\right] = \frac{\lfloor nt \rfloor}{n}\,\sigma^2. \qquad$ (2.114)

Then the classical CLT (Central Limit Theorem) implies that there exists a process $\{X(t) : t \ge 0\}$ with the property that for every $m \in \mathbb{N}$, for every choice $(t_1, \dots, t_m)$ of $m$ positive real numbers, and every bounded continuous function $f : \mathbb{R}^m \to \mathbb{C}$ the following limit equality holds:

$\lim_{n\to\infty} \mathbb{E}\left[f\left(X^{(n)}(t_1), \dots, X^{(n)}(t_m)\right)\right] = \mathbb{E}\left[f\left(X(t_1), \dots, X(t_m)\right)\right]. \qquad$ (2.115)

The equality in (2.115) says that the finite-dimensional distributions of the sequence of processes $\left\{X^{(n)}(t) : t \ge 0\right\}_{n\in\mathbb{N}}$ converge weakly to the finite-dimensional distributions of the process $\{X(t) : t \ge 0\}$. This limit should then be one-dimensional Brownian motion with drift zero and variance $\sigma^2$. A posteriori we know that Brownian motion should be $\mathbb{P}$-almost surely continuous. However, the processes $\left\{X^{(n)}(t) : t \ge 0\right\}_{n\in\mathbb{N}}$ have jumps. It would be nice if we were able to replace these processes which have jumps by processes without jumps. Therefore we employ linear interpolation. This can be done as follows. We introduce the following interpolating sequence of continuous processes:

$\widetilde{X}^{(n)}(t) = \frac{S_{\lfloor nt \rfloor}}{\sqrt{n}} + \left(nt - \lfloor nt \rfloor\right)\frac{Z_{\lfloor nt \rfloor + 1}}{\sqrt{n}}, \quad t \ge 0. \qquad$ (2.116)


Let $m$ and $n$ be positive integers. Then on the half open interval $\left[\dfrac{m}{n}, \dfrac{m+1}{n}\right)$ the variable $X^{(n)}(t)$ is constant in time $t$ at level $\dfrac{S_m}{\sqrt{n}}$, while $\widetilde{X}^{(n)}(t)$ changes linearly from $\dfrac{S_m}{\sqrt{n}}$ at time $t = \dfrac{m}{n}$ to $\dfrac{S_{m+1}}{\sqrt{n}}$ at time $t = \dfrac{m+1}{n}$. It can be proved that the sequence of stochastic processes

$\left\{\widetilde{X}^{(n)}(t) : t \ge 0\right\}_{n\in\mathbb{N}} \qquad$ (2.117)

converges weakly to Brownian motion with drift zero and variance $\sigma^2$. This is the content of the following FCLT (Functional Central Limit Theorem). The following result also goes under the name "Donsker's invariance principle": see, e.g., [15] or [42].
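The two approximations (2.113) and (2.116) can be sketched in a few lines (Bernoulli steps $\pm\sigma$ as above; the seed and the evaluation points are arbitrary illustrative choices):

```python
import math
import random

random.seed(2)
sigma = 1.0
n = 1000
# Bernoulli steps Z_j = +sigma or -sigma with probability 1/2; S_0 = Z_0 = 0.
Z = [sigma if random.random() < 0.5 else -sigma for _ in range(n + 1)]
S = [0.0]
for j in range(1, n + 1):
    S.append(S[-1] + Z[j])

def X_step(t):
    """The piecewise constant process (2.113): S_[nt] / sqrt(n)."""
    return S[int(n * t)] / math.sqrt(n)

def X_interp(t):
    """The linearly interpolated process (2.116)."""
    k = int(n * t)
    return S[k] / math.sqrt(n) + (n * t - k) * Z[k + 1] / math.sqrt(n)

t = 0.5
print(X_step(t), X_interp(t))

# At grid points t = m/n both processes coincide.
assert X_step(0.25) == X_interp(0.25)
# Between grid points they differ by at most one scaled step.
assert abs(X_step(t + 0.0004) - X_interp(t + 0.0004)) <= sigma / math.sqrt(n)
```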

2.48. Theorem (Functional Central Limit Theorem). Let $\left\{X^{(n)}(t) : t \in [0,T]\right\}$, $n \in \mathbb{N}$, and $\{X(t) : t \in [0,T]\}$ be stochastic processes possessing sample paths which are $\mathbb{P}$-almost surely continuous, with the property that the finite-dimensional distributions of the sequence $\left\{X^{(n)}(t) : t \in [0,T]\right\}_{n\in\mathbb{N}}$ converge weakly to those of $\{X(t) : t \in [0,T]\}$. Then the sequence $\left\{X^{(n)}(t) : t \in [0,T]\right\}_{n\in\mathbb{N}}$ converges weakly to $\{X(t) : t \in [0,T]\}$ if and only if for every $\varepsilon > 0$ the following equality holds:

$\lim_{\delta\downarrow 0}\ \sup_{n\in\mathbb{N}}\ \mathbb{P}\left[\sup_{0\le s,t\le T,\ |s-t|\le\delta}\left|X^{(n)}(s) - X^{(n)}(t)\right| > \varepsilon\right] = 0. \qquad$ (2.118)

This result is based on Prohorov's tightness theorem and the Arzelà-Ascoli characterization of compact subsets of $C[0,T]$.

2.49. Theorem (Prohorov theorem). Let $(P_n : n \in \mathbb{N})$ be a sequence of probability measures on a separable complete metrizable topological space $S$ with Borel $\sigma$-field $\mathcal{S}$. Then the following assertions are equivalent:

(i) For every $\varepsilon > 0$ there exists a compact subset $K_\varepsilon$ of $S$ such that $P_n\left[K_\varepsilon\right] \ge 1 - \varepsilon$ for all $n \in \mathbb{N}$.

(ii) Every subsequence of $(P_n : n \in \mathbb{N})$ has a subsequence which converges weakly to a probability measure on $(S, \mathcal{S})$.


A sequence $(P_n)_n$ satisfying (i) (or (ii)) in Theorem 2.49 is called a Prohorov set. Theorem 2.48 can be proved by applying Theorem 2.49 with $P_n$ equal to the $\mathbb{P}$-distribution of the process $\left\{X^{(n)}(t) : 0 \le t \le T\right\}$.






2.50. Theorem (Arzelà-Ascoli). Endow $C[0,T]$ with the topology of uniform convergence. A subset $A$ of $C[0,T]$ has compact closure if and only if it has the following properties:

(i) $\sup_{\omega\in A} |\omega(0)| < \infty$;

(ii) the subset $A$ is equi-continuous in the sense that

$\lim_{\delta\downarrow 0}\ \sup_{0\le s,t\le T,\ |s-t|\le\delta}\ \sup_{\omega\in A} |\omega(s) - \omega(t)| = 0.$

From (i) and (ii) it follows that $\sup_{\omega\in A}\sup_{s\in[0,T]} |\omega(s)| < \infty$, and hence $A$ is uniformly bounded. The result which is relevant here reads as follows. It is the same as Theorem T.8.4 in Bhattacharya and Waymire [15].

2.51. Theorem. Let $(P_n)_n$ be a sequence of probability measures on $C[0,T]$. Then $(P_n)_n$ is tight if and only if the following two conditions hold.

(i) For each $\eta > 0$ there is a number $B$ such that

$P_n\left[\omega \in C[0,T] : |\omega(0)| > B\right] \le \eta, \quad n = 1, 2, \dots$

(ii) For each $\varepsilon > 0$, $\eta > 0$, there is a $0 < \delta < 1$ such that

$P_n\left[\omega \in C[0,T] : \sup_{0\le s,t\le T,\ |s-t|\le\delta} |\omega(s) - \omega(t)| \ge \varepsilon\right] \le \eta, \quad n = 1, 2, \dots$

Proof. If the sequence $(P_n)_n$ is tight, then given $\eta > 0$ there is a compact subset $K$ of $C([0,T])$ such that $P_n(K) \ge 1 - \eta$ for all $n$. By the Arzelà-Ascoli theorem (Theorem 2.50), if $B > \sup_{\omega\in K} |\omega(0)|$, then

$P_n\left[\omega \in C[0,T] : |\omega(0)| > B\right] \le P_n\left[K^c\right] \le 1 - (1 - \eta) = \eta.$

Also, given $\varepsilon > 0$, select $\delta > 0$ such that $\sup_{\omega\in K}\ \sup_{|s-t|\le\delta} |\omega(s) - \omega(t)| < \varepsilon$. Then

$P_n\left[\omega \in C[0,T] : \sup_{0\le s,t\le T,\ |s-t|\le\delta} |\omega(s) - \omega(t)| \ge \varepsilon\right] \le P_n\left[K^c\right] \le \eta \quad\text{for all } n \ge 1.$

The converse goes as follows. Given $\eta > 0$, first select $B$ using (i) such that $P_n\left[\omega \in C([0,T]) : |\omega(0)| \le B\right] \ge 1 - \tfrac12\eta$ for $n \ge 1$. For each $r \ge 1$, select $\delta_r > 0$ using (ii) such that

$P_n\left[\omega \in C([0,T]) : \sup_{0\le s,t\le T,\ |s-t|\le\delta_r} |\omega(s) - \omega(t)| < \frac{1}{r}\right] \ge 1 - 2^{-(r+1)}\eta \quad\text{for } n \ge 1.$

Now take $K$ to be the uniform closure of

$\bigcap_{r=1}^{\infty}\left\{\omega \in C([0,T]) : |\omega(0)| \le B,\ \sup_{0\le s,t\le T,\ |s-t|\le\delta_r} |\omega(s) - \omega(t)| < \frac{1}{r}\right\}.$

Then $P_n(K) \ge 1 - \eta$ for $n \ge 1$, and $K$ is compact by the Arzelà-Ascoli theorem. This completes the proof of Theorem 2.51. $\Box$
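Condition (ii) involves the modulus of continuity $\sup_{|s-t|\le\delta}|\omega(s)-\omega(t)|$ of a path. For a path sampled on a finite grid this quantity is directly computable; the sketch below (the grid size and the test path are illustrative choices, not from the text) shows it shrinking with $\delta$, as equi-continuity requires.

```python
import math

def modulus(path, delta, T=1.0):
    """sup |w(s) - w(t)| over grid points with |s - t| <= delta,
    for a path sampled at equally spaced points on [0, T]."""
    N = len(path) - 1
    h = T / N                 # grid spacing
    window = int(delta / h)   # number of grid steps inside delta
    return max(abs(path[i] - path[j])
               for i in range(N + 1)
               for j in range(i, min(i + window, N) + 1))

# A smooth test path: w(t) = sin(2 pi t) on [0, 1].
N = 200
path = [math.sin(2 * math.pi * k / N) for k in range(N + 1)]

m_small = modulus(path, 0.01)
m_large = modulus(path, 0.25)
print(m_small, m_large)

# The modulus decreases with delta...
assert m_small < m_large
# ...and since |w'| <= 2 pi, it is bounded by 2 pi * delta.
assert m_small <= 2 * math.pi * 0.01 + 1e-12
```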




For convenience of the reader we formulate some limit theorems which are relevant in the main text of this book. The formulations are taken from Stirzaker [126]. For proofs the reader is also referred to Stirzaker. For convenience we also insert proofs which are based on Birkhoff's ergodic theorem. Define $S_n = \sum_{k=1}^{n} X_k$, where the variables $\{X_k\}_{k\in\mathbb{N}} \subseteq L^1(\Omega, \mathcal{F}, \mathbb{P})$ are independent and identically distributed (i.i.d.). Then we have the following three classic results.


2.52. Theorem (Central limit theorem, standard version). If $\mathbb{E}[X_k] = \mu$ and $0 < \operatorname{var}(X_k) = \sigma^2 < \infty$, then

$\lim_{n\to\infty} \mathbb{P}\left[\frac{S_n - n\mu}{(n\sigma^2)^{1/2}} \le x\right] = \Phi(x),$

where $\Phi(x)$ is the standard normal distribution function, i.e. $\Phi(x) = \dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{x} e^{-\frac12 y^2}\, dy$.

Proof. Let $f : \mathbb{R} \to \mathbb{C}$ be a bounded $C^2$-function with a bounded second derivative. Then by Taylor's formula (or by integration by parts) we have

$f(y) = f(0) + y f'(0) + \tfrac12 y^2 f''(0) + \int_0^1 (1-s)\, y^2 \left\{f''(sy) - f''(0)\right\} ds. \qquad$ (2.119)

Put $Y_{n,k} = \dfrac{X_k - \mu}{\sqrt{n\sigma^2}}$. Inserting $y = Y_{n,k}$ into (2.119) yields

$f(Y_{n,k}) = f(0) + Y_{n,k} f'(0) + \tfrac12 Y_{n,k}^2 f''(0) + \int_0^1 (1-s)\, Y_{n,k}^2 \left\{f''(sY_{n,k}) - f''(0)\right\} ds. \qquad$ (2.120)

Then we take expectations in (2.120) to obtain:

$\mathbb{E}\left[f(Y_{n,k})\right] = f(0) + \frac{f''(0)}{2n} + \int_0^1 (1-s)\, \mathbb{E}\left[Y_{n,k}^2 \left\{f''(sY_{n,k}) - f''(0)\right\}\right] ds. \qquad$ (2.121)

Put

$\varepsilon_n(t) = n \int_0^1 (1-s)\, \mathbb{E}\left[Y_{n,1}^2 \left\{f''(sY_{n,1}) - f''(0)\right\}\right] ds, \quad t \in \mathbb{R},$

and choose $f(y) = e^{-ity}$. Observe that, uniformly in $t$ on compact subsets of $\mathbb{R}$, $\lim_{n\to\infty} \varepsilon_n(t) = 0$. Then, since the variables $Y_{n,k}$, $1 \le k \le n$, are i.i.d., from (2.121) we get

$\mathbb{E}\left[e^{-itY_{n,k}}\right] = 1 - \frac{t^2}{2n} + \frac{\varepsilon_n(t)}{n}. \qquad$ (2.122)

From (2.122) we infer

$\mathbb{E}\left[e^{-it\sum_{k=1}^{n} Y_{n,k}}\right] = \left(1 - \frac{t^2}{2n} + \frac{\varepsilon_n(t)}{n}\right)^n. \qquad$ (2.123)

Let $Y : \Omega \to \mathbb{R}$ be a standard normally distributed random variable. From the properties of the sequence $\{\varepsilon_n(t)\}_n$ and (2.123) we see that, for every $0 < R < \infty$,

$\lim_{n\to\infty} \sup_{|t|\le R}\left|\mathbb{E}\left[e^{-it\sum_{k=1}^{n} Y_{n,k}}\right] - \mathbb{E}\left[e^{-itY}\right]\right| = \lim_{n\to\infty} \sup_{|t|\le R}\left|\mathbb{E}\left[e^{-it\sum_{k=1}^{n} Y_{n,k}}\right] - e^{-\frac12 t^2}\right|$

$= \lim_{n\to\infty} \sup_{|t|\le R}\left|\left(1 - \frac{t^2}{2n} + \frac{\varepsilon_n(t)}{n}\right)^n - e^{-\frac12 t^2}\right| = 0. \qquad$ (2.124)

The conclusion in Theorem 2.52 then follows from (2.124) together with Lévy's continuity theorem: see Theorem 5.42, and Theorem 5.43, assertions (9) and (10). $\Box$
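The chain of equalities (2.122)-(2.124) can be checked numerically for symmetric Bernoulli summands $X_k = \pm 1$ (so $\mu = 0$, $\sigma^2 = 1$), for which the characteristic function of $Y_{n,k}$ is exactly $\cos(t/\sqrt{n})$. The following is an illustrative computation, not part of the text:

```python
import math

# For X_k = +-1 with probability 1/2 each we have Y_{n,k} = X_k / sqrt(n),
# so E[exp(-i t Y_{n,k})] = cos(t / sqrt(n)) = 1 - t^2/(2n) + eps_n(t)/n.
t = 2.0

def eps_n(n):
    return n * (math.cos(t / math.sqrt(n)) - 1 + t ** 2 / (2 * n))

def cf_sum(n):
    # E[exp(-i t sum_k Y_{n,k})] = (1 - t^2/(2n) + eps_n(t)/n)^n, cf. (2.123)
    return math.cos(t / math.sqrt(n)) ** n

limit = math.exp(-t ** 2 / 2)
for n in (10, 100, 10000):
    print(n, eps_n(n), cf_sum(n))

# eps_n(t) -> 0, and the characteristic functions converge to exp(-t^2/2).
assert abs(eps_n(10000)) < abs(eps_n(100)) < abs(eps_n(10))
assert abs(cf_sum(10000) - limit) < 1e-3
```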


2.53. Theorem (Weak law of large numbers). If $\mathbb{E}[X_k] = \mu < \infty$, then for all $\varepsilon > 0$,

$\lim_{n\to\infty} \mathbb{P}\left[\left|\frac{S_n}{n} - \mu\right| > \varepsilon\right] = 0.$

For a proof of the following theorem see (the proof of) Theorem 5.60. It is 
proved as a consequence of the (pointwise) ergodic theorem of Birkhoff: see 
Theorems 5.59 and 5.66, and Corollary 5.67. 




2.54. Theorem (Strong law of large numbers). The equality

$\lim_{n\to\infty} \frac{S_n}{n} = \mu \quad\text{holds } \mathbb{P}\text{-almost surely} \qquad$ (2.125)

for some finite constant $\mu$, if and only if $\mathbb{E}\left[|X_1|\right] < \infty$, and then $\mu = \mathbb{E}[X_1]$. Moreover, the limit in (2.125) also exists in $L^1$-sense.


We will show that Theorem 2.53 is a consequence of Theorem 2.54. 


Proof of Theorem 2.53. Let $\varepsilon > 0$ be arbitrary, and let $\{X_k\}_k$ and $\mu$ be as in Theorem 2.53. Then

$\mathbb{P}\left[\left|\frac{S_n}{n} - \mu\right| > \varepsilon\right] \le \frac{1}{\varepsilon}\, \mathbb{E}\left[\left|\frac{S_n}{n} - \mu\right|\right]. \qquad$ (2.126)

By the $L^1$-version of Theorem 2.54 it follows that the right-hand side of (2.126) converges to $0$. This shows that Theorem 2.53 is a consequence of Theorem 2.54. $\Box$
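The strong law is easy to observe along a single simulated realization. The sketch below (uniform summands on $[0,2]$, so $\mu = 1$; an arbitrary illustrative choice) records the sample averages $S_n/n$ at a few values of $n$.

```python
import random

random.seed(11)

# Sample averages S_n / n for i.i.d. uniform variables on [0, 2]
# (so mu = E[X_1] = 1), along a single realization.
mu = 1.0
averages = {}
s, n_max = 0.0, 100000
for n in range(1, n_max + 1):
    s += random.uniform(0.0, 2.0)
    if n in (100, 10000, 100000):
        averages[n] = s / n

print(averages)

# The averages settle down near mu = 1 (strong law, Theorem 2.54).
assert abs(averages[100000] - mu) < 0.01
assert abs(averages[10000] - mu) < 0.05
```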


The central limit theorem is the principal reason for the appearance of the normal (or "bell-shaped") distribution in so many statistical and scientific contexts. The first version of this theorem was proved by Abraham de Moivre before 1733. The laws of large numbers supply a solid foundation for our faith in the usefulness and good behavior of averages. In particular, as we have remarked above, they support one of our most appealing interpretations of probability as long-term relative frequency. The first version of the weak law was proved by James Bernoulli around 1700, and the first form of the strong law by Émile Borel in 1909. We include proofs of these results in the form as stated. As noted above, a proof of Theorem 2.54 will be based on Birkhoff's ergodic theorem.

2.55. Remark. The following papers and books give information about the central limit theorem in the context of Stein's method, which stems from Stein [124]: see Barbour and Hall [9], Barbour and Chen [8], Chen, Goldstein and Shao [31], Nourdin and Peccati [101], Berckmoes et al. [13]. This is a very interesting and elegant method to prove convergence and give estimates for partial sums of so-called standard triangular arrays (STA). It yields sharp estimates: see the forthcoming paper [14].


4. Gaussian vectors

The following theorem gives a definition of a Gaussian (or multivariate normally distributed) vector purely in terms of its characteristic function (the Fourier transform of its distribution).

2.56. Theorem. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, and let $X = (X_1, \dots, X_n)$ be an $\mathbb{R}^n$-valued Gaussian vector in the sense that there exists a vector $\mu := (\mu_1, \dots, \mu_n) \in \mathbb{R}^n$ and a symmetric square matrix $\sigma := (\sigma_{j,k})_{j,k=1}^{n}$ such that the characteristic function of the vector $X$ is given by

$\mathbb{E}\left[e^{-i\langle\xi, X\rangle}\right] = e^{-i\langle\xi,\mu\rangle - \frac12\sum_{j,k=1}^{n}\xi_j\xi_k\sigma_{j,k}} \quad\text{for all } \xi = (\xi_1, \dots, \xi_n) \in \mathbb{R}^n. \qquad$ (2.127)

Then for every $1 \le j \le n$ the variable $X_j$ belongs to $L^2(\Omega, \mathcal{F}, \mathbb{P})$, $\mu_j = \mathbb{E}[X_j]$, and

$\sigma_{j,k} = \operatorname{cov}(X_j, X_k) = \mathbb{E}\left[(X_j - \mathbb{E}[X_j])(X_k - \mathbb{E}[X_k])\right]. \qquad$ (2.128)


Proof. Put $Y = X - \mu$, and fix $\varepsilon > 0$. Then the equality in (2.127) is equivalent to

$\mathbb{E}\left[e^{-i\langle\xi, Y\rangle}\right] = e^{-\frac12\sum_{j,k=1}^{n}\xi_j\xi_k\sigma_{j,k}} \quad\text{for all } \xi = (\xi_1, \dots, \xi_n) \in \mathbb{R}^n. \qquad$ (2.129)

From Cauchy's theorem and the equality $\dfrac{1}{\sqrt{2\pi}}\displaystyle\int_{-\infty}^{\infty} e^{-\frac12\eta^2}\, d\eta = 1$ we obtain

$e^{-\frac12\varepsilon|Y|^2} = \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} e^{-i\sqrt{\varepsilon}\langle\eta, Y\rangle}\, e^{-\frac12|\eta|^2}\, d\eta. \qquad$ (2.130)

From (2.129) and (2.130) we infer:

$\mathbb{E}\left[e^{-i\langle\xi, Y\rangle}\, e^{-\frac12\varepsilon|Y|^2}\right] = \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} \mathbb{E}\left[e^{-i\langle\xi + \sqrt{\varepsilon}\,\eta,\, Y\rangle}\right] e^{-\frac12|\eta|^2}\, d\eta$

(employ (2.129) with $\xi + \sqrt{\varepsilon}\,\eta$ instead of $\xi$)

$= \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} e^{-\frac12\sum_{j,k=1}^{n}(\xi_j + \sqrt{\varepsilon}\,\eta_j)(\xi_k + \sqrt{\varepsilon}\,\eta_k)\sigma_{j,k}}\, e^{-\frac12|\eta|^2}\, d\eta. \qquad$ (2.131)

Next we take $1 \le \ell_1, \ell_2 \le n$, and we differentiate the right-hand side and left-hand side of (2.131) with respect to $\xi_{\ell_1}$, and the result with respect to $\xi_{\ell_2}$. In addition we write a negative sign in front of this. Then we obtain:

$\mathbb{E}\left[e^{-i\langle\xi, Y\rangle}\, Y_{\ell_1} Y_{\ell_2}\, e^{-\frac12\varepsilon|Y|^2}\right]$

$= \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} e^{-\frac12\sum_{j,k=1}^{n}(\xi_j + \sqrt{\varepsilon}\,\eta_j)(\xi_k + \sqrt{\varepsilon}\,\eta_k)\sigma_{j,k}}\, e^{-\frac12|\eta|^2} \left(\frac{\sigma_{\ell_1,\ell_2} + \sigma_{\ell_2,\ell_1}}{2} - \frac14 \prod_{m=1}^{2}\sum_{j=1}^{n}(\xi_j + \sqrt{\varepsilon}\,\eta_j)\left(\sigma_{j,\ell_m} + \sigma_{\ell_m,j}\right)\right) d\eta. \qquad$ (2.132)

Inserting $\xi = 0$ into (2.132) yields:

$\mathbb{E}\left[Y_{\ell_1} Y_{\ell_2}\, e^{-\frac12\varepsilon|Y|^2}\right] = \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} e^{-\frac12\varepsilon\sum_{j,k=1}^{n}\eta_j\eta_k\sigma_{j,k}}\, e^{-\frac12|\eta|^2} \left(\frac{\sigma_{\ell_1,\ell_2} + \sigma_{\ell_2,\ell_1}}{2} - \frac{\varepsilon}{4}\prod_{m=1}^{2}\sum_{j=1}^{n}\eta_j\left(\sigma_{j,\ell_m} + \sigma_{\ell_m,j}\right)\right) d\eta. \qquad$ (2.133)

First assume that $\ell_1 = \ell_2 = \ell$. Then the left-hand side of (2.133) increases to $\mathbb{E}\left[Y_\ell^2\right]$, and the right-hand side increases to $\dfrac{1}{\left(\sqrt{2\pi}\right)^n}\displaystyle\int_{\mathbb{R}^n} e^{-\frac12|\eta|^2}\, d\eta\ \sigma_{\ell,\ell} = \sigma_{\ell,\ell}$ if $\varepsilon$ decreases to zero. Consequently, $Y_\ell \in L^2(\Omega, \mathcal{F}, \mathbb{P})$ and $\mathbb{E}\left[Y_\ell^2\right] = \sigma_{\ell,\ell}$, $1 \le \ell \le n$.

It follows that $Y_\ell$ belongs to $L^1(\Omega, \mathcal{F}, \mathbb{P})$, and that we also have $Y_{\ell_1} Y_{\ell_2} \in L^1(\Omega, \mathcal{F}, \mathbb{P})$. By applying the same procedure as above we also obtain that

$\mathbb{E}\left[Y_{\ell_1} Y_{\ell_2}\right] = \frac{\sigma_{\ell_1,\ell_2} + \sigma_{\ell_2,\ell_1}}{2} = \sigma_{\ell_1,\ell_2}. \qquad$ (2.134)

In (2.134) we employed the symmetry of the matrix $(\sigma_{j,k})_{j,k=1}^{n}$. Again we fix $1 \le \ell \le n$, and we differentiate the equality in (2.131) with respect to $\xi_\ell$ to obtain

$i\,\mathbb{E}\left[e^{-i\langle\xi, Y\rangle}\, Y_\ell\, e^{-\frac12\varepsilon|Y|^2}\right] = \frac{1}{\left(\sqrt{2\pi}\right)^n}\int_{\mathbb{R}^n} e^{-\frac12\sum_{j,k=1}^{n}(\xi_j + \sqrt{\varepsilon}\,\eta_j)(\xi_k + \sqrt{\varepsilon}\,\eta_k)\sigma_{j,k}}\, e^{-\frac12|\eta|^2}\ \frac12\sum_{j=1}^{n}(\xi_j + \sqrt{\varepsilon}\,\eta_j)\left(\sigma_{j,\ell} + \sigma_{\ell,j}\right) d\eta. \qquad$ (2.135)

In (2.135) we set $\xi = 0$, and we let $\varepsilon \downarrow 0$ to obtain $\mathbb{E}[Y_\ell] = 0$, and hence $X_\ell \in L^1(\Omega, \mathcal{F}, \mathbb{P})$ and $\mathbb{E}[X_\ell] = \mu_\ell$. This completes the proof of Theorem 2.56. $\Box$
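Theorem 2.56 can be illustrated by sampling: if $X = \mu + AG$ with $G$ standard normal in $\mathbb{R}^2$, then $X$ is Gaussian with mean $\mu$ and dispersion matrix $\sigma = AA^{T}$, and the empirical mean and covariance recover $\mu$ and $\sigma$. The vector $\mu$ and the matrix $A$ below are arbitrary illustrative choices.

```python
import random

random.seed(3)

# A Gaussian vector X = mu + A G, with G standard normal in R^2;
# its dispersion matrix is sigma = A A^T.
mu = [1.0, -2.0]
A = [[1.0, 0.0],
     [0.5, 0.8]]
sigma = [[A[i][0] * A[j][0] + A[i][1] * A[j][1] for j in range(2)]
         for i in range(2)]

N = 20000
xs = []
for _ in range(N):
    g = (random.gauss(0, 1), random.gauss(0, 1))
    xs.append([mu[i] + A[i][0] * g[0] + A[i][1] * g[1] for i in range(2)])

mean = [sum(x[i] for x in xs) / N for i in range(2)]
cov = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in xs) / N
        for j in range(2)] for i in range(2)]
print(mean, cov)

# mu_j = E[X_j] and sigma_{j,k} = cov(X_j, X_k), as in Theorem 2.56.
assert all(abs(mean[i] - mu[i]) < 0.05 for i in range(2))
assert all(abs(cov[i][j] - sigma[i][j]) < 0.05 for i in range(2) for j in range(2))
```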


5. Radon-Nikodym Theorem

We begin by formulating a convenient version of the Radon-Nikodym theorem. For a proof the reader is referred to Bauer [10] or Stroock [130].

2.57. Theorem (Radon-Nikodym theorem). Let $(\Omega, \mathcal{F}, \mu)$ be a $\sigma$-finite measure space, and let $\nu$ be a finite measure on $\mathcal{F}$. Suppose that $\nu$ is absolutely continuous relative to $\mu$, i.e. $\mu(B) = 0$ implies $\nu(B) = 0$. Then there exists a function $f \in L^1(\Omega, \mathcal{F}, \mu)$ such that $\nu(B) = \int_B f\, d\mu$ for all $B \in \mathcal{F}$. In particular the function $f$ is $\mathcal{F}$-measurable.

The following corollary follows from Theorem 2.57 by taking $\mathcal{F} = \mathcal{B}$, $\mu$ the measure $\mathbb{P}$ confined to $\mathcal{B}$, and $\nu(B) = \mathbb{E}[X, B]$, $B \in \mathcal{B}$.

2.58. Corollary. Let $(\Omega, \mathcal{A}, \mathbb{P})$ be a probability space, and let $\mathcal{B}$ be a subfield (i.e. a sub-$\sigma$-field) of $\mathcal{A}$. Let $X$ be a stochastic variable in $L^1(\Omega, \mathcal{A}, \mathbb{P})$. Then there exists a $\mathcal{B}$-measurable variable $Z$ on $\Omega$ with the following properties:

(1) (qualitative property) the variable $Z$ is $\mathcal{B}$-measurable;

(2) (quantitative property) for every $B \in \mathcal{B}$ the equality $\mathbb{E}[Z, B] = \mathbb{E}[X, B]$ holds.

The variable $Z$ is called the conditional expectation of $X$, and is denoted by $Z = \mathbb{E}[X \mid \mathcal{B}]$. The existence is guaranteed by the Radon-Nikodym theorem.
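When $\mathcal{B}$ is generated by a finite partition of a finite probability space, the conditional expectation of Corollary 2.58 is simply the $\mathbb{P}$-weighted average of $X$ over each partition block. A minimal sketch (the probabilities, values, and partition are illustrative choices):

```python
# Conditional expectation E[X | B] when B is generated by a finite
# partition of a finite probability space: on each block, Z equals the
# P-weighted average of X over that block.
P = [0.1, 0.2, 0.3, 0.4]      # probabilities of omega_0, ..., omega_3
X = [4.0, 1.0, 2.0, 3.0]      # a stochastic variable
blocks = [[0, 1], [2, 3]]     # the partition generating B

Z = [0.0] * len(P)
for block in blocks:
    pb = sum(P[w] for w in block)
    avg = sum(P[w] * X[w] for w in block) / pb
    for w in block:
        Z[w] = avg

print(Z)

# Quantitative property (2): E[Z, B] = E[X, B] for each block B.
for block in blocks:
    ez = sum(P[w] * Z[w] for w in block)
    ex = sum(P[w] * X[w] for w in block)
    assert abs(ez - ex) < 1e-12
```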

6. Some martingales

Let $E$ be a locally compact Hausdorff space which is second countable, and let

$\left\{\left(\Omega, \mathcal{F}, \mathbb{P}_x\right),\ (X(t),\ t \ge 0),\ (\vartheta_t,\ t \ge 0),\ (E, \mathcal{E})\right\} \qquad$ (2.136)




be a time-homogeneous strong Markov process with right-continuous paths, which also have left limits in the state space $E$ on their life time. Put $S(t)f(x) = \mathbb{E}_x\left[f(X(t))\right]$, $f \in C_0(E)$, and assume that $S(t)f \in C_0(E)$ whenever $f \in C_0(E)$. Here a real or complex valued function $f$ belongs to $C_0(E)$ provided that it is continuous and that for every $\varepsilon > 0$ the subset $\{x \in E : |f(x)| \ge \varepsilon\}$ is compact in $E$. Let the operator $L$ be the generator of this process. This means that its domain $D(L)$ consists of those functions $f \in C_0(E)$ for which the limit

$Lf = \lim_{t\downarrow 0} \frac{S(t)f - f}{t}$

exists in $C_0(E)$ equipped with the supremum norm, i.e. $\|f\|_\infty = \sup_{x\in E} |f(x)|$, $f \in C_0(E)$. The Markov property of the process in (2.136) together with the right continuity of paths implies that the family $\{S(t) : t \ge 0\}$ is a Feller, or, more properly, a Feller-Dynkin semigroup.

(1) The semigroup property can be expressed as follows:

$S(t_1 + t_2) = S(t_1)\, S(t_2), \quad t_1, t_2 \ge 0, \quad S(0) = I.$

(2) Moreover, the right-continuity of paths implies

$\lim_{t\downarrow 0} S(t)f(x) = \lim_{t\downarrow 0} \mathbb{E}_x\left[f(X(t))\right] = \mathbb{E}_x\left[f(X(0))\right] = f(x), \quad f \in C_0(E).$

(3) In addition, if $0 \le f \le 1$, then $0 \le S(t)f \le 1$.

A semigroup with the properties (1), (2) and (3) is called a Feller, or Feller-Dynkin semigroup. In fact, it can be proved that a Feller-Dynkin semigroup $\{S(t) : t \ge 0\}$ satisfies

$\lim_{s\to t} \|S(s)f - S(t)f\|_\infty = 0, \quad t \ge 0, \quad f \in C_0(E).$

Let $\{S(t) : t \ge 0\}$ be a Feller-Dynkin semigroup. Then it can be shown that there exists a Markov process, as in (2.136), with right-continuous paths such that $S(t)f(x) = \mathbb{E}_x\left[f(X(t))\right]$, $f \in C_0(E)$, $t \ge 0$. For details, see Blumenthal and Getoor [20]. Similar results are true for state spaces which are Polish; see, e.g., [146].

Let $t \mapsto M(t)$, $t \ge 0$, be an adapted right-continuous multiplicative process, i.e. $M(0) = 1$ and $M(s)\left(M(t) \circ \vartheta_s\right) = M(s+t)$, $s, t \ge 0$. Put $S_M(t)f(x) = \mathbb{E}_x\left[M(t)\, f(X(t))\right]$, $f \in C_0(E)$, $t \ge 0$. Assume that the operators $S_M(t)$ leave the space $C_0(E)$ invariant, so that $S_M(t)f$ belongs to $C_0(E)$ whenever $f \in C_0(E)$. Then the family $\{S_M(t) : t \ge 0\}$ has the semigroup property $S_M(s+t) = S_M(s)\, S_M(t)$, $s, t \ge 0$, and $\lim_{t\downarrow 0} S_M(t)f(x) = f(x)$, $f \in C_0(E)$. If, in addition, for every $f \in C_0(E)$ there exists a $\delta > 0$ such that $\sup_{0\le t\le\delta} \|S_M(t)f\|_\infty < \infty$, then

$\lim_{t\downarrow 0} \|S_M(t)f - f\|_\infty = 0, \quad f \in C_0(E). \qquad$ (2.137)

Moreover, there exists a closed densely defined linear operator $L_M$ such that

$L_M f = C_0(E)\text{-}\lim_{t\downarrow 0} \frac{S_M(t)f - f}{t} \qquad$ (2.138)

for $f \in D(L_M)$, the domain of $L_M$. If $M(t) = 1$, then $L_M = L$.
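For a finite state space, a Feller-Dynkin semigroup takes the form $S(t) = e^{tL}$ for a generator matrix $L$ whose rows sum to $0$, and the defining limit $Lf = \lim_{t\downarrow 0}(S(t)f - f)/t$ can be observed numerically. The two-state generator below is an arbitrary illustrative choice.

```python
# For a finite state space the semigroup is S(t) = exp(tL), with L a
# generator matrix (off-diagonal rates, rows summing to 0).
L = [[-1.0, 1.0],
     [2.0, -2.0]]

def semigroup(t, terms=60):
    """S(t) = exp(tL) via the (truncated) exponential series."""
    n = len(L)
    S = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in S]  # (tL)^k / k!, starting at k = 0
    for k in range(1, terms):
        term = [[sum(term[i][m] * L[m][j] for m in range(n)) * t / k
                 for j in range(n)] for i in range(n)]
        S = [[S[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return S

f = [1.0, 3.0]
Lf = [sum(L[i][j] * f[j] for j in range(2)) for i in range(2)]

for t in (0.1, 0.01, 0.001):
    St = semigroup(t)
    quotient = [(sum(St[i][j] * f[j] for j in range(2)) - f[i]) / t
                for i in range(2)]
    print(t, quotient)

# (S(t)f - f)/t approaches Lf as t decreases to 0.
t = 0.001
St = semigroup(t)
quotient = [(sum(St[i][j] * f[j] for j in range(2)) - f[i]) / t for i in range(2)]
assert all(abs(quotient[i] - Lf[i]) < 0.01 for i in range(2))
```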




2.59. Proposition. The following processes are $P_x$-martingales:
\[ t \mapsto M(t) f(X(t)) - M(0) f(X(0)) - \int_0^t M(s) L_M f(X(s))\, ds, \quad t \geq 0, \ f \in D(L_M), \tag{2.139} \]
\[ s \mapsto M(s)\, E_{X(s)}\big[ M(t-s) f(X(t-s)) \big], \quad 0 \leq s \leq t, \ f \in C_0(E), \tag{2.140} \]
\[ s \mapsto M(s)\, E_{X(s)}\big[ M(t-s-u)\, p(u, X(t-s-u), y) \big], \quad 0 \leq s \leq t-u. \tag{2.141} \]

In (2.141) it is assumed that there exists a "reference" measure $m$ on the Borel
field $\mathcal{E}$ together with a density function $p(t,x,y)$, $(t,x,y) \in (0,\infty) \times E \times E$,
such that $E_x[f(X(t))] = \int p(t,x,y) f(y)\, dm(y)$ for all $f \in C_0(E)$, for all
$x \in E$ and all $t > 0$. From the semigroup property it follows that $p(s+t,x,y) =
\int p(s,x,z)\, p(t,z,y)\, dm(z)$ for $m$-almost all $y \in E$. Assuming that $m(O) > 0$
for all non-empty open subsets $O$ of $E$, and that the function $(t,x,y) \mapsto p(t,x,y)$
is continuous on $(0,\infty) \times E \times E$, it follows that the equality $p(s+t,x,y) =
\int p(s,x,z)\, p(t,z,y)\, dm(z)$ holds for $s, t > 0$ and for all $x, y \in E$.
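The Chapman-Kolmogorov identity $p(s+t,x,y) = \int p(s,x,z)\, p(t,z,y)\, dm(z)$ can be checked numerically for the Gaussian transition density on $\mathbb{R}$ with $m$ the Lebesgue measure; the integration grid and the chosen points below are illustrative.

```python
import numpy as np

# Numerical check of Chapman-Kolmogorov for the Gaussian density on R
# (reference measure m = Lebesgue measure); grid and points are illustrative.

def p(t, x, y):
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

z = np.linspace(-20.0, 20.0, 4001)
dz = z[1] - z[0]
s, t, x, y = 0.3, 0.9, -0.5, 1.2

lhs = p(s + t, x, y)
rhs = np.sum(p(s, x, z) * p(t, z, y)) * dz   # int p(s,x,z) p(t,z,y) dz
```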





The following corollary is the same as Proposition 2.59 with $M = 1$.

2.60. Corollary. The following processes are $P_x$-martingales:
\[ t \mapsto f(X(t)) - f(X(0)) - \int_0^t L f(X(s))\, ds, \quad t \geq 0, \ f \in D(L), \tag{2.142} \]
\[ s \mapsto E_{X(s)}\big[ f(X(t-s)) \big], \quad 0 \leq s \leq t, \ f \in C_0(E), \tag{2.143} \]
\[ s \mapsto p(t-s, X(s), y), \quad 0 \leq s \leq t. \tag{2.144} \]

Like in Proposition 2.59, in (2.144) it is assumed that there exists a "reference"
strictly positive Borel measure $m$ such that for a (unique) continuous density
function $p(t,x,y)$ the identity $E_x[f(X(t))] = \int p(t,x,y) f(y)\, dm(y)$ holds for
all $f \in C_0(E)$, for all $x \in E$ and all $t > 0$.

2.61. Lemma. Let the continuous density be as in Proposition 2.59, and let
$z \in E$. Then the following equality holds for all $0 \leq s < t$ and for all $y \in E$:
\[ E_z\big[ p(t-s, X(s), y) \big] = p(t, z, y). \tag{2.145} \]

Proof of Lemma 2.61. Let the notation be as in Lemma 2.61. Then by
the identity of Chapman-Kolmogorov we have
\[ E_z\big[ p(t-s, X(s), y) \big] = \int p(s, z, w)\, p(t-s, w, y)\, dm(w) = p(t, z, y). \tag{2.146} \]
The equality in (2.146) is the same as the one in (2.145), which completes the
proof of Lemma 2.61. $\square$


Proof of Proposition 2.59. First let $f$ belong to the domain of $L_M$,
and let $t_2 > t_1 \geq 0$. Then we have
\[
\begin{aligned}
& E_x\Big[ M(t_2) f(X(t_2)) - M(0) f(X(0)) - \int_0^{t_2} M(s) L_M f(X(s))\, ds \,\Big|\, \mathcal{F}_{t_1} \Big] \\
& \qquad - M(t_1) f(X(t_1)) + M(0) f(X(0)) + \int_0^{t_1} M(s) L_M f(X(s))\, ds \\
& = E_x\Big[ M(t_1) \Big( M(t_2 - t_1) f(X(t_2 - t_1)) - M(0) f(X(0)) - \int_0^{t_2 - t_1} M(s) L_M f(X(s))\, ds \Big) \circ \vartheta_{t_1} \,\Big|\, \mathcal{F}_{t_1} \Big]
\end{aligned}
\]
(Markov property)
\[ = M(t_1)\, E_{X(t_1)}\Big[ M(t_2 - t_1) f(X(t_2 - t_1)) - M(0) f(X(0)) - \int_0^{t_2 - t_1} M(s) L_M f(X(s))\, ds \Big] \]
(definition of the operator $S_M(t)$; put $z = X(t_1)$, and $t = t_2 - t_1$)
\[
\begin{aligned}
& = M(t_1) \Big( S_M(t) f(z) - E_z\big[ M(0) f(X(0)) \big] - \int_0^t S_M(s) L_M f(z)\, ds \Big) \\
& = M(t_1) \Big( S_M(t) f(z) - E_z\big[ M(0) f(X(0)) \big] - \int_0^t \frac{d}{ds}\, S_M(s) f(z)\, ds \Big) \\
& = M(t_1) \Big( S_M(t) f(z) - E_z\big[ M(0) f(X(0)) \big] - S_M(t) f(z) + S_M(0) f(z) \Big) = 0. \tag{2.147}
\end{aligned}
\]
The equalities in (2.147) show the equality in (2.139).

Let $f \in C_0(E)$ and $t > 0$. In order to show that the process
\[ s \mapsto M(s)\, E_{X(s)}\big[ M(t-s) f(X(t-s)) \big] \]
is a $P_x$-martingale we proceed as follows:
\[ M(s)\, E_{X(s)}\big[ M(t-s) f(X(t-s)) \big] \]
(Markov property)
\[ = M(s)\, E_x\big[ \big( M(t-s) \circ \vartheta_s \big)\, \big( f(X(t-s)) \circ \vartheta_s \big) \,\big|\, \mathcal{F}_s \big]
= E_x\big[ M(s)\, M(t-s) \circ \vartheta_s\, f(X(t)) \,\big|\, \mathcal{F}_s \big]
= E_x\big[ M(t) f(X(t)) \,\big|\, \mathcal{F}_s \big]. \tag{2.148} \]
It is clear that the process in (2.148) is a martingale. This proves that the
process in (2.140) is a martingale. A similar argument shows the equality:
\[ M(s)\, E_{X(s)}\big[ M(t-s-u)\, p(u, X(t-s-u), y) \big]
= E_x\big[ M(t-u)\, p(u, X(t-u), y) \,\big|\, \mathcal{F}_s \big], \quad 0 \leq s \leq t-u. \tag{2.149} \]
Again it is clear that the process in (2.149), as a function of $s$, is a $P_x$-martingale.
Altogether this proves Proposition 2.59. $\square$

Proof of Corollary 2.60. The fact that the processes in (2.142) and
(2.143) are $P_x$-martingales is an immediate consequence of (2.139) and (2.140)
respectively, by inserting $M(\rho) = 1$ for all $0 \leq \rho \leq t$. If $M(\rho) = 1$ for all
$0 \leq \rho \leq t$, then by Lemma 2.61 we get
\[ M(s)\, E_{X(s)}\big[ M(t-s-u)\, p(u, X(t-s-u), y) \big]
= E_{X(s)}\big[ p(u, X(t-s-u), y) \big] = p(t-s, X(s), y). \tag{2.150} \]
On the other hand, by the Markov property we also have:
\[ E_{X(s)}\big[ p(u, X(t-s-u), y) \big] = E_x\big[ p(u, X(t-u), y) \,\big|\, \mathcal{F}_s \big]. \tag{2.151} \]
As a consequence of (2.150) and (2.151) we see that the process in (2.144) is a
martingale. This completes the proof of Corollary 2.60. $\square$

2.62. Remark. In general the process $s \mapsto p(t-s, X(s), y)$, $0 \leq s < t$, is not a
closed martingale. In many concrete examples we have $\lim_{s \uparrow t,\, s < t} p(t-s, X(s), y) =
0$, $P_x$-almost surely, on the one hand, and $E_x[p(t-s, X(s), y)] = p(t, x, y) >
0$ on the other. For an example of this situation take $d$-dimensional Brownian
motion. By Scheff\'e's theorem it follows that the $P_x$-martingale $s \mapsto
p(t-s, X(s), y)$, $0 \leq s < t$, cannot be a closed martingale. If it were, then
there would exist an $\mathcal{F}_t$-measurable variable $F(t) = \lim_{s \uparrow t,\, s < t} p(t-s, X(s), y)$
with the property that $p(t-s, X(s), y) = E_x[F(t) \mid \mathcal{F}_s]$. Since $F(t) = 0$, $P_x$-
almost surely, this is a contradiction.
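The dichotomy in Remark 2.62 can be observed numerically for one-dimensional Brownian motion: the expectation $E_x[p(t-s, B(s), y)]$ stays equal to $p(t,x,y)$ for every $s < t$, while for a frozen position $b \neq y$ the value $p(t-s, b, y)$ collapses to $0$ as $s \uparrow t$. Sample sizes and all points below are illustrative assumptions.

```python
import numpy as np

# Remark 2.62 for 1-d Brownian motion: E_x[p(t-s, B(s), y)] = p(t, x, y)
# for every s < t, yet p(t-s, b, y) -> 0 as s increases to t for fixed b != y.
# Sample sizes and all points are illustrative assumptions.

rng = np.random.default_rng(0)

def p(t, x, y):
    return np.exp(-(x - y) ** 2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)

t, x, y, n = 1.0, 0.0, 2.0, 400_000
target = p(t, x, y)                                 # the constant expectation

means = []
for s in (0.2, 0.5, 0.9):
    B_s = x + np.sqrt(s) * rng.standard_normal(n)   # samples of B(s) under P_x
    means.append(p(t - s, B_s, y).mean())           # ~ E_x[p(t-s, B(s), y)]

# Pathwise collapse: for a frozen position b = 0.5 != y the density blows down.
vals = [p(t - s, 0.5, y) for s in (0.5, 0.9, 0.99)]
```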




In the following corollary we consider a special multiplicative process: $M(s) =
1_{\{T > s\}}$, where $T$ is a terminal stopping time, i.e. $T = s + T \circ \vartheta_s$, $P_x$-almost
surely, on the event $\{T > s\}$ for all $s \geq 0$ and for all $x \in E$.

2.63. Corollary. The following processes are $P_x$-martingales:
\[ t \mapsto 1_{\{T > t\}} f(X(t)) - 1_{\{T > 0\}} f(X(0)) - \int_0^{t \wedge T} L_M f(X(s))\, ds, \quad t \geq 0, \ f \in D(L_M), \tag{2.152} \]
\[ s \mapsto 1_{\{T > s\}}\, E_{X(s)}\big[ f(X(t-s)),\, T > t-s \big], \quad 0 \leq s \leq t, \ f \in C_0(E), \tag{2.153} \]
\[ s \mapsto 1_{\{T > s\}} \Big( p(t-s, X(s), y) - E_{X(s)}\big[ p(t-s-T, X(T), y),\, T \leq t-s \big] \Big), \quad 0 \leq s < t. \tag{2.154} \]

As in Proposition 2.59 it is assumed that there exists a "reference" measure $m$ on the Borel
field $\mathcal{E}$ together with a density function $p(t,x,y)$, $(t,x,y) \in (0,\infty) \times E \times E$, such
that $E_x[f(X(t))] = \int p(t,x,y) f(y)\, dm(y)$ for all $f \in C_0(E)$, for all $x \in E$
and all $t > 0$.





It is noticed that $L_M f(x)$ is only defined pointwise, and that for certain points
$x \in E$ the limit
\[ L_M f(x) := \lim_{t \downarrow 0} \frac{E_x\big[ M(t) f(X(t)) \big] - f(x)}{t} \]
does not even exist. A good example is obtained by taking for $T$ the exit
time from an open subset $U$: $T = T_U = \inf\{s > 0 : X(s) \in E \setminus U\}$. If
\[ \lim_{t \downarrow 0} \frac{P_x[T \leq t]}{t} = 0 \quad \text{for all } x \in U, \]
then $L_M f(x) = L f(x)$ for $x \in U$.

Proof of Corollary 2.63. It is only (2.154) which needs some explanation;
the others are direct consequences of Proposition 2.59. To this end we fix
$0 < u < t$. Then by (2.141) the process
\[ s \mapsto 1_{\{T > s\}}\, E_{X(s)}\big[ p(u, X(t-s-u), y),\, T > t-s-u \big] \]
is a martingale on the closed interval $[0, t-u]$. Next we rewrite
\[ E_{X(s)}\big[ p(u, X(t-s-u), y),\, T > t-s-u \big]
= E_{X(s)}\big[ p(u, X(t-s-u), y) \big] - E_{X(s)}\big[ p(u, X(t-s-u), y),\, T \leq t-s-u \big] \tag{2.155} \]
(the process $\rho \mapsto p(t-s-\rho, X(\rho), y)$ is a $P_z$-martingale with $z = X(s)$; put
$u = t-s$ in the first term, and $u = t-s-T$ in the second term of the
right-hand side of (2.155))
\[ = E_{X(s)}\big[ p(t-s, X(0), y) \big] - E_{X(s)}\big[ p(t-s-T, X(T), y),\, T \leq t-s-u \big]. \tag{2.156} \]
By letting $u \downarrow 0$ in (2.156) and using (2.141) of Proposition 2.59 we obtain that
the process in (2.154) is a $P_x$-martingale. This completes the proof of Corollary
2.63. $\square$

Next let
\[ \big\{ (\Omega, \mathcal{F}, P_x),\ (B(t), t \geq 0),\ (\vartheta_t, t \geq 0),\ \big( \mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d} \big) \big\} \]
be the Markov process of Brownian motion. Another application of martingale
theory is the following example. Let $U$ be an open subset of $\mathbb{R}^d$ with smooth
enough boundary $\partial U$ ($C^1$ will do), and let $f : \partial U \to \mathbb{R}$ be a bounded continuous
function on the boundary $\partial U$ of $U$. Let $u : \overline{U} \to \mathbb{R}$ be a continuous function
such that $u(x) = f(x)$ for $x \in \partial U$ and such that $\Delta u(x) = 0$ for $x \in U$. Let $\tau_U$
be the first exit time from $U$: $\tau_U = \inf\big\{ s > 0 : B(s) \in \mathbb{R}^d \setminus U \big\}$. Then
\[ u(x) = E_x\big[ f(B(\tau_U)) : \tau_U < \infty \big] + \lim_{t \to \infty} E_x\big[ u(B(t)) : \tau_U = \infty \big]. \tag{2.157} \]
Notice that the first expression in (2.157) makes sense, because it can be proved
that Brownian motion is $P_x$-almost surely continuous for $x \in \mathbb{R}^d$. The proof
uses the following facts: stopped martingales are again martingales, and the
processes
\[ t \mapsto f(B(t)) - f(B(0)) - \frac12 \int_0^t \Delta f(B(s))\, ds, \quad f \in C_b(\mathbb{R}^d), \ \Delta f \in C_b(\mathbb{R}^d), \tag{2.158} \]




are martingales. The fact that a process of the form (2.158) is a martingale
follows from (2.142) in Corollary 2.60. It can also be proved using the equality
\[ \frac{\partial}{\partial t} p_d(t, x, y) = \frac12 \Delta p_d(t, x, y), \tag{2.159} \]
where
\[ p_d(t, x, y) = \frac{1}{(2\pi t)^{d/2}} \exp\Big( -\frac{|x-y|^2}{2t} \Big). \]
A proof of (2.158) runs as follows. Pick $t_2 > t_1 \geq 0$ and a function $f \in C_b(\mathbb{R}^d)$,
such that $\Delta f$ also belongs to $C_b(\mathbb{R}^d)$. Then we have:


\[
\begin{aligned}
& E_x\Big[ f(B(t_2)) - f(B(0)) - \frac12 \int_0^{t_2} \Delta f(B(s))\, ds \,\Big|\, \mathcal{F}_{t_1} \Big] \\
& \qquad - f(B(t_1)) + f(B(0)) + \frac12 \int_0^{t_1} \Delta f(B(s))\, ds \\
& = E_x\Big[ \Big( f(B(t_2 - t_1)) - f(B(0)) - \frac12 \int_0^{t_2 - t_1} \Delta f(B(s))\, ds \Big) \circ \vartheta_{t_1} \,\Big|\, \mathcal{F}_{t_1} \Big]
\end{aligned}
\]
(Markov property of Brownian motion)
\[ = E_{B(t_1)}\Big[ f(B(t_2 - t_1)) - f(B(0)) - \frac12 \int_0^{t_2 - t_1} \Delta f(B(s))\, ds \Big] \]
(put $z = B(t_1)$, and $t = t_2 - t_1$)
\[
\begin{aligned}
& = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \frac12 \int_0^t E_z\big[ \Delta f(B(s)) \big]\, ds \\
& = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \frac12 \int_0^t \int_{\mathbb{R}^d} p_d(s, z, y)\, \Delta f(y)\, dy\, ds \\
& = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \lim_{\varepsilon \downarrow 0} \frac12 \int_\varepsilon^t \int_{\mathbb{R}^d} p_d(s, z, y)\, \Delta f(y)\, dy\, ds
\end{aligned}
\]
(integration by parts)
\[ = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \lim_{\varepsilon \downarrow 0} \int_\varepsilon^t \int_{\mathbb{R}^d} \frac12 \Delta_y p_d(s, z, y)\, f(y)\, dy\, ds \]
(use the equality in (2.159))
\[ = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \lim_{\varepsilon \downarrow 0} \int_\varepsilon^t \int_{\mathbb{R}^d} \frac{\partial}{\partial s} p_d(s, z, y)\, f(y)\, dy\, ds \]
(interchange integration and differentiation)
\[ = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \lim_{\varepsilon \downarrow 0} \int_\varepsilon^t \frac{d}{ds} \int_{\mathbb{R}^d} p_d(s, z, y)\, f(y)\, dy\, ds \]
(fundamental rule of calculus)
\[
\begin{aligned}
& = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - \lim_{\varepsilon \downarrow 0} \Big( \int_{\mathbb{R}^d} p_d(t, z, y)\, f(y)\, dy - \int_{\mathbb{R}^d} p_d(\varepsilon, z, y)\, f(y)\, dy \Big) \\
& = E_z\big[ f(B(t)) \big] - E_z\big[ f(B(0)) \big] - E_z\big[ f(B(t)) \big] + \lim_{\varepsilon \downarrow 0} E_z\big[ f(B(\varepsilon)) \big] \\
& = \lim_{\varepsilon \downarrow 0} E_z\big[ f(B(\varepsilon)) \big] - E_z\big[ f(B(0)) \big] = 0. \tag{2.160}
\end{aligned}
\]


From Doob's optional sampling theorem it follows that processes of the form
\[ t \mapsto f(B(\tau_U \wedge t)) - f(B(0)) - \frac12 \int_0^{\tau_U \wedge t} \Delta f(B(s))\, ds, \quad f \in C_b(\mathbb{R}^d), \ \Delta f \in C_b(\mathbb{R}^d), \tag{2.161} \]
are $P_x$-martingales for $x \in U$. We can apply this property to our harmonic
function $u$. It follows that the process
\[ t \mapsto u(B(\tau_U \wedge t)) - u(B(0)) - \frac12 \int_0^{\tau_U \wedge t} \Delta u(B(s))\, ds = u(B(\tau_U \wedge t)) - u(B(0)) \tag{2.162} \]
is a martingale. Consequently, from (2.162) we get
\[
\begin{aligned}
u(x) = u(B(0)) = E_x\big[ u(B(\tau_U \wedge t)) \big]
& = E_x\big[ u(B(\tau_U \wedge t)),\, \tau_U \leq t \big] + E_x\big[ u(B(\tau_U \wedge t)),\, \tau_U > t \big] \\
& = E_x\big[ u(B(\tau_U)),\, \tau_U \leq t \big] + E_x\big[ u(B(t)),\, \tau_U > t \big]. \tag{2.163}
\end{aligned}
\]
In (2.163) we let $t \to \infty$ to obtain the equality in (2.157).
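A minimal Monte Carlo sketch of the representation $u(x) = E_x[f(B(\tau_U))]$: in dimension one with $U = (0,1)$, $f(0) = 0$ and $f(1) = 1$, the harmonic function is $u(x) = x$. The Euler step size, sample count and tolerance are illustrative, and the scheme carries a small discretization bias.

```python
import numpy as np

# Monte Carlo sketch of u(x) = E_x[f(B(tau_U))] for U = (0,1) in dimension
# one with boundary values f(0) = 0 and f(1) = 1, so that u(x) = x.
# Step size, sample count and tolerance are illustrative assumptions.

rng = np.random.default_rng(1)

def exit_value(x, dt=1e-3):
    """Euler walk of Brownian motion until it leaves (0,1); return f at exit."""
    while 0.0 < x < 1.0:
        x += np.sqrt(dt) * rng.standard_normal()
    return 1.0 if x >= 1.0 else 0.0

x0, n = 0.3, 2000
estimate = np.mean([exit_value(x0) for _ in range(n)])   # ~ u(x0) = x0
```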




2.64. Proposition. Let $t \mapsto M_1(t)$ and $t \mapsto M_2(t)$ be two continuous martingales
in $L^2(\Omega, \mathcal{F}, P)$ with covariation process $t \mapsto \langle M_1, M_2 \rangle(t)$, so that in particular
the process $t \mapsto M_1(t) M_2(t) - \langle M_1, M_2 \rangle(t)$ is a martingale in $L^1(\Omega, \mathcal{F}, P)$.
Then the process
\[ t \mapsto \big( M_1(t) - M_1(s) \big)\big( M_2(t) - M_2(s) \big) - \langle M_1, M_2 \rangle(t) + \langle M_1, M_2 \rangle(s), \quad t \geq s, \tag{2.164} \]
is a martingale.

In fact by It\^o calculus we have the following integration by parts formula:
\[
\begin{aligned}
\big( M_1(t) - M_1(s) \big)\big( M_2(t) - M_2(s) \big)
& = \int_s^t \big( M_1(\rho) - M_1(s) \big)\, dM_2(\rho) + \int_s^t \big( M_2(\rho) - M_2(s) \big)\, dM_1(\rho) \\
& \quad + \langle M_1, M_2 \rangle(t) - \langle M_1, M_2 \rangle(s), \quad t \geq s. \tag{2.165}
\end{aligned}
\]

Proof of Proposition 2.64. Fix $t_2 > t_1 \geq s$. Then we calculate:
\[
\begin{aligned}
& E\big[ (M_1(t_2) - M_1(s))(M_2(t_2) - M_2(s)) \,\big|\, \mathcal{F}_{t_1} \big]
 - E\big[ \langle M_1, M_2 \rangle(t_2) - \langle M_1, M_2 \rangle(s) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad - (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) + \langle M_1, M_2 \rangle(t_1) - \langle M_1, M_2 \rangle(s) \\
& = E\big[ (M_1(t_2) - M_1(s))(M_2(t_2) - M_2(s)) \,\big|\, \mathcal{F}_{t_1} \big]
 - E\big[ \langle M_1, M_2 \rangle(t_2) - \langle M_1, M_2 \rangle(t_1) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad - (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) \\
& = E\big[ (M_1(t_2) - M_1(t_1) + M_1(t_1) - M_1(s))(M_2(t_2) - M_2(t_1) + M_2(t_1) - M_2(s)) \\
& \qquad - \langle M_1, M_2 \rangle(t_2) + \langle M_1, M_2 \rangle(t_1) \,\big|\, \mathcal{F}_{t_1} \big]
 - (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) \\
& = E\big[ (M_1(t_2) - M_1(t_1))(M_2(t_2) - M_2(t_1)) - \langle M_1, M_2 \rangle(t_2) + \langle M_1, M_2 \rangle(t_1) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad + E\big[ (M_1(t_1) - M_1(s))(M_2(t_2) - M_2(t_1)) \,\big|\, \mathcal{F}_{t_1} \big]
 + E\big[ (M_1(t_2) - M_1(t_1))(M_2(t_1) - M_2(s)) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad + E\big[ (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) \,\big|\, \mathcal{F}_{t_1} \big]
 - (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) \\
& = E\big[ M_1(t_2) M_2(t_2) - \langle M_1, M_2 \rangle(t_2) + \langle M_1, M_2 \rangle(t_1) - M_1(t_1) M_2(t_1) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad + (M_1(t_1) - M_1(s))\, E\big[ M_2(t_2) - M_2(t_1) \,\big|\, \mathcal{F}_{t_1} \big]
 + (M_2(t_1) - M_2(s))\, E\big[ M_1(t_2) - M_1(t_1) \,\big|\, \mathcal{F}_{t_1} \big] \\
& \quad + (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) - (M_1(t_1) - M_1(s))(M_2(t_1) - M_2(s)) = 0. \tag{2.166}
\end{aligned}
\]




In the final step of (2.166) we employed the martingale property of the following
processes:
\[ t \mapsto M_1(t) M_2(t) - \langle M_1, M_2 \rangle(t), \quad t \mapsto M_1(t), \quad \text{and} \quad t \mapsto M_2(t). \]
This completes the proof of Proposition 2.64. $\square$
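Proposition 2.64 can be checked by simulation for two correlated Brownian motions $M_1 = B_1$ and $M_2 = \rho B_1 + \sqrt{1-\rho^2}\, B_2$, for which $\langle M_1, M_2 \rangle(t) = \rho t$, so that $E[(M_1(t)-M_1(s))(M_2(t)-M_2(s))] = \rho(t-s)$. The parameters are illustrative.

```python
import numpy as np

# Simulation check of Proposition 2.64: for M1 = B1 and
# M2 = rho*B1 + sqrt(1-rho^2)*B2 one has <M1, M2>(t) = rho*t, hence
# E[(M1(t)-M1(s))(M2(t)-M2(s))] = rho*(t-s). Parameters are illustrative.

rng = np.random.default_rng(2)
rho, s, t, n = 0.6, 0.4, 1.0, 500_000

dB1 = np.sqrt(t - s) * rng.standard_normal(n)   # increments B1(t) - B1(s)
dB2 = np.sqrt(t - s) * rng.standard_normal(n)   # independent of dB1

dM1 = dB1
dM2 = rho * dB1 + np.sqrt(1.0 - rho ** 2) * dB2

empirical = np.mean(dM1 * dM2)
expected = rho * (t - s)                        # <M1,M2>(t) - <M1,M2>(s)
```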







CHAPTER 3 


An introduction to stochastic processes: Brownian 
motion, Gaussian processes and martingales 


In this chapter of the book we will study several aspects of Brownian motion:
Brownian motion as a Gaussian process, Brownian motion as a Markov process,
and Brownian motion as a martingale. The chapter also includes a discussion of
stochastic integrals and It\^o's formula.


1. Gaussian processes 


We begin with an important extension theorem of Kolmogorov, which enables
us to construct stochastic processes like Gaussian processes, L\'evy processes,
Poisson processes and others. It is also useful for the construction of Markov
processes. In Theorem 3.1 the symbol $\Omega_J$, $J \subseteq I$, stands for the product space
$\Omega_J = \prod_{j \in J} \Omega_j$ endowed with the product $\sigma$-field $\mathcal{F}_J$. By saying that the system
$\{(\Omega_J, \mathcal{F}_J, P_J) : J \subseteq I,\ J \text{ finite}\}$ is a projective system (or a consistent system,
or a cylindrical measure) we mean that
\[ P_{J_1}\big[ p_{J_1}^{J_2} \in A \big] = P_{J_2}[A], \]
where $A \in \mathcal{F}_{J_2}$, and where $J_2 \subseteq J_1 \subseteq I$, $J_1$ finite. The mapping $p_{J_1}^{J_2} : \Omega_{J_1} \to \Omega_{J_2}$,
$J_2 \subseteq J_1$, is defined by $p_{J_1}^{J_2}\big( (\omega_j)_{j \in J_1} \big) = (\omega_j)_{j \in J_2}$. In practice this means that in order to prove
that the system $\{(\Omega_J, \mathcal{F}_J, P_J) : J \subseteq I,\ J \text{ finite}\}$ is a projective system indeed,
we have to show an equality of the form ($j_0 \notin J$):
\[ P_{J \cup \{j_0\}}\big[ B \times \Omega_{j_0} \big] = P_J[B], \quad B \in \mathcal{F}_J. \]
The following proposition says that under certain conditions a cylindrical measure
in fact is a genuine measure.

3.1. Theorem (Extension theorem of Kolmogorov). Let
\[ \{(\Omega_J, \mathcal{F}_J, P_J) : J \subseteq I,\ J \text{ finite}\} \]
be a projective system of probability spaces (or distributions). Suppose that each
$\Omega_i$ is a metrizable and $\sigma$-compact Hausdorff space endowed with its Borel field
$\mathcal{F}_i$. Then there exists a unique probability measure $P_I$ on $(\Omega_I, \mathcal{F}_I)$ such that
\[ P_I[p_J \in A] = P_I\big( p_J^{-1}(A) \big) = P_J(A) \tag{3.1} \]
for every $J \subseteq I$, $J$ finite, and for every $A \in \mathcal{F}_J$.


For an extensive discussion on Kolmogorov’s extension theorem see, e.g., the 
Probability Theory lecture notes of B. Driver [40]. These lecture notes include 




a discussion on standard Borel spaces and on Polish spaces. Kolmogorov's
extension theorem is also valid if the spaces $\Omega_i$ are Polish spaces, or Souslin
spaces, which are continuous images of Polish spaces. For more details see Appendix
17.6 in [40]. The reader may also consult [21] or [137]. In Theorem
7.4.3 of [21] the author shows that finite positive measures on Souslin spaces
are regular and concentrated on $\sigma$-compact subsets. Bogachev's book contains
lots of information on Souslin spaces. In fact much material which is presented
in this book can also be found in the lecture notes by Bruce Driver. A proof of
Kolmogorov's extension theorem is supplied in Section 4 of Chapter 5: see (the
proof of) Theorem 5.81.

Next we recall Bochner’s theorem. 

3.2. Theorem (Bochner). Let $\varphi : \mathbb{R}^n \to \mathbb{C}$ be a continuous complex function
that is positive definite in the sense that for all $r \in \mathbb{N}$
\[ \sum_{k,\ell=1}^r \lambda_k \overline{\lambda_\ell}\, \varphi\big( \xi^k - \xi^\ell \big) \geq 0, \tag{3.2} \]
for all $\lambda_1, \ldots, \lambda_r \in \mathbb{C}$ and for all $\xi^1, \ldots, \xi^r \in \mathbb{R}^n$. Then there exists a unique
non-negative Borel measure $\mu$ on $\mathbb{R}^n$ such that its Fourier transform
\[ \xi \mapsto \int \exp\big( -i \langle \xi, x \rangle \big)\, d\mu(x) \]
is equal to $\varphi(\xi)$ for $\xi \in \mathbb{R}^n$. In particular $\mu(\mathbb{R}^n) = \varphi(0)$.
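Condition (3.2) can be probed numerically: for the function $\varphi(\xi) = e^{-|\xi|}$, which is (up to normalization) the Fourier transform of the Cauchy distribution, the Gram matrix $\big( \varphi(\xi_k - \xi_\ell) \big)_{k,\ell}$ has a non-negative spectrum. The sample points below are illustrative.

```python
import numpy as np

# Numerical probe of condition (3.2): for phi(xi) = exp(-|xi|), which is
# (up to normalization) the Fourier transform of the Cauchy distribution,
# the Gram matrix (phi(xi_k - xi_l)) is positive semi-definite.
# The sample points are illustrative.

phi = lambda xi: np.exp(-np.abs(xi))

xi = np.array([-2.0, -0.3, 0.0, 0.7, 1.5, 4.0])
A = phi(xi[:, None] - xi[None, :])      # A[k, l] = phi(xi_k - xi_l)

min_eig = np.linalg.eigvalsh(A).min()   # spectrum of the symmetric matrix A
```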

3.3. Example. Let, for every $i \in I$, $P_i$ be a probability measure on $\Omega_i$,
and define $P_J$ on $\Omega_J$, $J \subseteq I$, $J$ finite, by $P_J(A) = P_{j_1} \otimes \cdots \otimes P_{j_n}(A)$, where $A$ belongs
to $\mathcal{F}_J$ and where $J = (j_1, \ldots, j_n)$. Then the family $\{P_J : J \subseteq I,\ J \text{ finite}\}$
is a consistent system or cylindrical measure.

3.4. Example. Let $\sigma : I \times I \to \mathbb{R}$ be a symmetric (i.e. $\sigma(i,j) = \sigma(j,i)$ for all $i$,
$j$ in $I$) function such that for every finite subset $J = (j_1, \ldots, j_n)$ of $I$ the matrix
$(\sigma(i,j))_{i,j \in J}$ is positive-definite in the sense that
\[ \sum_{i,j \in J} \sigma(i,j)\, \xi_i \xi_j \geq 0, \tag{3.3} \]
for all $\xi_{j_1}, \ldots, \xi_{j_n} \in \mathbb{R}$. In the non-degenerate case we shall assume that the
inequality in (3.3) is strict whenever the vector $(\xi_{j_1}, \ldots, \xi_{j_n})$ is non-zero. Define
the process $(i, \omega) \mapsto X_i(\omega)$ by $X_i(\omega) = \omega_i$, where $\omega \in \Omega_I = \mathbb{R}^I$ is given
by $\omega = (\omega_i)_{i \in I}$. Let $\mu = (\mu_i)_{i \in I} \in \mathbb{R}^I$ be a map from $I$ to $\mathbb{R}$. There exists a
unique probability measure $P$ on the $\sigma$-field on $\Omega_I$ generated by $(X_i)_{i \in I}$ with the
following property:
\[ E\Big[ \exp\Big( -i \sum_{j \in J} \xi_j X_j \Big) \Big] = \exp\Big( -i \sum_{j \in J} \xi_j \mu_j \Big) \exp\Big( -\frac12 \sum_{i,j \in J} \sigma(i,j)\, \xi_i \xi_j \Big). \tag{3.4} \]
This measure possesses the following additional properties:
\[ E(X_j) = \mu_j, \quad j \in I, \qquad \text{and} \qquad \operatorname{cov}(X_i, X_j) = \sigma(i,j), \quad i, j \in I. \tag{3.5} \]




Notice that $\sum_{u,v=1}^n \xi_u \xi_v \operatorname{cov}\big( X_{j_u}, X_{j_v} \big) \geq 0$ whenever $\xi_1, \ldots, \xi_n$ belong to $\mathbb{R}$. For
a proof of this result we shall employ both Bochner's theorem as well as Kolmogorov's
extension theorem. Therefore let $J = (j_1, \ldots, j_n)$ be a finite subset
of $I$, let $\lambda_k$, $1 \leq k \leq r$, be complex numbers and let $\xi^k$, $1 \leq k \leq r$, be vectors in
$\mathbb{R}^n = \mathbb{R}^J$. Put $\lambda_k' = \lambda_k \exp\big( -i \sum_{u=1}^n \xi_u^k \mu_{j_u} \big)$, and let $U$ be an orthogonal matrix
with the property that the matrix $\big( U \sigma U^{-1}(u,v) \big)_{u,v=1}^n$ has the diagonal form
\[ \big( U \sigma U^{-1}(u,v) \big)_{u,v=1}^n =
\begin{pmatrix} s_1 & 0 & \cdots & 0 \\ 0 & s_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & s_n \end{pmatrix}. \]
We also write $\big( \eta_1^k, \ldots, \eta_n^k \big) = U \xi^k$.
We may and do suppose that the eigenvalues $s_1, \ldots, s_m$, $m \leq n$, are non-zero
and the others (if any) are $0$. Then we get
\[
\begin{aligned}
& \sum_{k,\ell=1}^r \lambda_k' \overline{\lambda_\ell'} \exp\Big( -\frac12 \sum_{u,v=1}^n \sigma(j_u, j_v)\big( \xi_u^k - \xi_u^\ell \big)\big( \xi_v^k - \xi_v^\ell \big) \Big) \\
& = \sum_{k,\ell=1}^r \lambda_k' \overline{\lambda_\ell'} \exp\Big( -\frac12 \sum_{u,v=1}^n \big( U \sigma U^{-1} \big)(u,v)\big( \eta_u^k - \eta_u^\ell \big)\big( \eta_v^k - \eta_v^\ell \big) \Big) \\
& = \sum_{k,\ell=1}^r \lambda_k' \overline{\lambda_\ell'} \exp\Big( -\frac12 \sum_{u=1}^m s_u \big( \eta_u^k - \eta_u^\ell \big)^2 \Big) \\
& = \sum_{k,\ell=1}^r \lambda_k' \overline{\lambda_\ell'}\, \frac{1}{\big( \sqrt{2\pi} \big)^m \prod_{u=1}^m \sqrt{s_u}} \int \cdots \int \exp\Big( i \sum_{u=1}^m \big( \eta_u^k - \eta_u^\ell \big) a_u \Big) \exp\Big( -\sum_{u=1}^m \frac{a_u^2}{2 s_u} \Big)\, da_1 \ldots da_m \\
& = \frac{1}{\big( \sqrt{2\pi} \big)^m \prod_{u=1}^m \sqrt{s_u}} \int \cdots \int \Big| \sum_{k=1}^r \lambda_k' \exp\Big( i \sum_{u=1}^m \eta_u^k a_u \Big) \Big|^2 \exp\Big( -\sum_{u=1}^m \frac{a_u^2}{2 s_u} \Big)\, da_1 \ldots da_m \geq 0. \tag{3.6}
\end{aligned}
\]


From Bochner's Theorem 3.2 it follows that there exists a probability measure
$\Pi_J$ on $\mathbb{R}^J$ such that, for all $\xi \in \mathbb{R}^n$,
\[ \int_{\mathbb{R}^J} \exp\Big( -i \sum_{u=1}^n \xi_u x_u \Big)\, d\Pi_J(x)
= \exp\Big( -i \sum_{u=1}^n \xi_u \mu_{j_u} \Big) \exp\Big( -\frac12 \sum_{u,v=1}^n \sigma(j_u, j_v)\, \xi_u \xi_v \Big). \tag{3.7} \]


Define the probability measure $P_J$ on $\Omega_J = \prod_{j \in J} \Omega_j$ by
\[ P_J\big[ \big( X_{j_1}, \ldots, X_{j_n} \big) \in B \big] = \Pi_J[B], \]
where $B$ is a Borel subset of $\mathbb{R}^J$. The collection $(\Omega_J, \mathcal{F}_J, P_J)$ is a projective
system. For let $J' := \{j_0\} \cup J$ be a subset of $I$, which is of size $1 + \text{size } J =
1 + n$, and let $B$ be a Borel subset of $\mathbb{R}^J$. The Fourier transform of the measure
$B \mapsto \Pi_{J'}[\mathbb{R} \times B]$ is given by the function:
\[
\begin{aligned}
\big( \xi_{j_1}, \ldots, \xi_{j_n} \big)
& \mapsto \int_{\mathbb{R}^J} \exp\Big( -i \sum_{j \in J} \xi_j x_j \Big)\, \Pi_{J'}[\mathbb{R} \times dx] \\
& = \int_{\mathbb{R}} \int_{\mathbb{R}^J} \exp\Big( -i \sum_{j \in J} \xi_j x_j \Big)\, \Pi_{J'}[dy \times dx] \\
& = \exp\Big( -i \sum_{j \in J'} \xi_j \mu_j \Big) \exp\Big( -\frac12 \sum_{i,j \in J'} \sigma(i,j)\, \xi_i \xi_j \Big) \\
& = \exp\Big( -i \sum_{j \in J} \xi_j \mu_j \Big) \exp\Big( -\frac12 \sum_{i,j \in J} \sigma(i,j)\, \xi_i \xi_j \Big), \qquad \text{with } \xi_{j_0} = 0.
\end{aligned}
\]



In the previous formula we used the equality $\xi_{j_0} = 0$ several times; $J' = J \cup \{j_0\}$.
It follows that $\Pi_J[B] = \Pi_{J'}[\mathbb{R} \times B]$. An application of the extension theorem
of Kolmogorov yields the desired result in Example 3.4.
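For a finite index set the construction of Example 3.4 amounts to sampling a Gaussian vector with prescribed mean $\mu$ and covariance $\sigma$; a Cholesky factor gives such samples, and the properties (3.5) can then be checked empirically. The particular $\mu$, $\sigma$ and the sample size are illustrative.

```python
import numpy as np

# Example 3.4 on a finite index set: sample a Gaussian vector with
# prescribed mean mu and covariance sigma via a Cholesky factor and check
# the properties (3.5) empirically. mu, sigma and n are illustrative.

rng = np.random.default_rng(3)

mu = np.array([1.0, -0.5, 2.0])
sigma = np.array([[1.0, 0.4, 0.2],
                  [0.4, 0.8, 0.1],
                  [0.2, 0.1, 0.5]])    # symmetric and positive definite

L = np.linalg.cholesky(sigma)          # sigma = L L^T
n = 300_000
X = mu + rng.standard_normal((n, 3)) @ L.T   # rows distributed as N(mu, sigma)

emp_mean = X.mean(axis=0)              # ~ mu     (first part of (3.5))
emp_cov = np.cov(X, rowvar=False)      # ~ sigma  (second part of (3.5))
```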


Suppose that the matrix $\big( \sigma(j_u, j_v) \big)_{u,v=1}^n$ is non-degenerate (i.e. suppose that
its determinant is non-zero) and let $\big( a(u,v) \big)_{u,v=1}^n$ be its inverse. Then
\[ P\big( \big( X_{j_1}, \ldots, X_{j_n} \big) \in B \big)
= \frac{(\det a)^{1/2}}{(2\pi)^{n/2}} \int dx_1 \ldots dx_n\, 1_B(x_1, \ldots, x_n)
\exp\Big( -\frac12 \sum_{u,v=1}^n a(u,v)\big( x_u - \mu_{j_u} \big)\big( x_v - \mu_{j_v} \big) \Big). \tag{3.8} \]


Equality (3.8) can be proved by showing that the Fourier transforms of both 
measures in (3.8) coincide. In the following propositions (Propositions 3.5 and 
3.6) we mention some elementary facts on Gaussian vectors. Gaussian vectors 
are multivariate normally distributed random vectors. 


3.5. Proposition. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $X^i : \Omega \to \mathbb{R}^{n_i}$,
$i = 1, 2$, be random vectors with the property that the random vector $X(\omega) :=
(X^1(\omega), X^2(\omega))$ is Gaussian in the sense that ($n = n_1 + n_2$)
\[ E\Big[ \exp\Big( -i \sum_{k=1}^n \xi_k X_k \Big) \Big]
= \exp\Big( -i \sum_{k=1}^n \xi_k \mu_k \Big) \exp\Big( -\frac12 \sum_{k,\ell=1}^n \sigma(k,\ell)\, \xi_k \xi_\ell \Big), \tag{3.9} \]
where the matrix $\big( \sigma(k,\ell) \big)_{k,\ell=1}^n$ is positive definite and where $(\mu_1, \ldots, \mu_n)$ is a vector
in $\mathbb{R}^n$. The vectors $X^1$ and $X^2$ are $P$-independent if and only if they are
uncorrelated in the sense that
\[ E\big( X_i^1 X_j^2 \big) = E\big( X_i^1 \big)\, E\big( X_j^2 \big) \tag{3.10} \]
for all $1 \leq i \leq n_1$ and for all $1 \leq j \leq n_2$.


Proof. The necessity is clear. For the sufficiency we proceed as follows.
Put
\[ \big( X^1, X^2 \big) = \big( X_1, \ldots, X_{n_1}, X_{n_1+1}, \ldots, X_{n_1+n_2} \big). \]
Since the vectors $X^1$ and $X^2$ are uncorrelated (see (3.10)), it follows that
\[ \sum_{k,\ell=1}^n \sigma(k,\ell)\, \xi_k \xi_\ell
= \sum_{k,\ell=1}^{n_1} \sigma(k,\ell)\, \xi_k \xi_\ell + \sum_{k,\ell=n_1+1}^n \sigma(k,\ell)\, \xi_k \xi_\ell. \tag{3.11} \]
From (3.9) it follows that
\[ E\Big( \exp\Big( -i \sum_{k=1}^n \xi_k X_k \Big) \Big)
= E\Big( \exp\Big( -i \sum_{k=1}^{n_1} \xi_k X_k \Big) \Big)\, E\Big( \exp\Big( -i \sum_{k=n_1+1}^{n_1+n_2} \xi_k X_k \Big) \Big), \tag{3.12} \]
and hence that the random vectors $X^1$ and $X^2$ are independent. $\square$




3.6. Proposition. Let $(\Omega, \mathcal{F}, P)$ be a probability space.

(a) Let $Q : \mathbb{R}^n \to \mathbb{R}^m$ be a linear map. If $X : \Omega \to \mathbb{R}^n$ is a Gaussian
vector, then so is $QX$.

(b) A random vector $X : \Omega \to \mathbb{R}^n$ is Gaussian if and only if for every
$\xi \in \mathbb{R}^n$ the random variable $\omega \mapsto \langle \xi, X(\omega) \rangle$ is Gaussian.

Proof. (a) A random vector $X$ is Gaussian if and only if the Fourier transform
of the measure $B \mapsto P(X \in B)$ is of the form
\[ \xi \mapsto \exp\big( -i \langle \xi, \mu \rangle \big) \exp\Big( -\frac12 \langle \sigma \xi, \xi \rangle \Big). \]
By a standard result on image measures the Fourier transform of the measure
$B \mapsto P(QX \in B)$, where $X : \Omega \to \mathbb{R}^n$ is Gaussian and where $B$ is a Borel subset
of $\mathbb{R}^m$, is given by
\[ \xi \mapsto E\big[ \exp\big( -i \langle \xi, QX \rangle \big) \big] = E\big[ \exp\big( -i \langle Q^* \xi, X \rangle \big) \big]
= \exp\big( -i \langle Q^* \xi, \mu \rangle \big) \exp\Big( -\frac12 \langle \sigma Q^* \xi, Q^* \xi \rangle \Big). \tag{3.13} \]
This proves (a). It also proves that the dispersion matrix of $QX$ is given by
$Q \sigma Q^*$.

(b) For the necessity we apply (a) with the linear map $Q x := \langle \xi, x \rangle$, $x \in \mathbb{R}^n$,
where $\xi \in \mathbb{R}^n$ is fixed. For the sufficiency we again fix $\xi \in \mathbb{R}^n$. Since $Y := \langle \xi, X \rangle$
is a Gaussian variable we have
\[ E\big( \exp\big( -i \langle \xi, X \rangle \big) \big) = E\big( \exp(-iY) \big)
= \exp\big( -i E(Y) \big) \exp\Big( -\frac12 E\big( Y - E(Y) \big)^2 \Big)
= \exp\big( -i \langle \xi, \mu \rangle \big) \exp\Big( -\frac12 \langle \sigma \xi, \xi \rangle \Big), \tag{3.14} \]
where $\mu = E(X)$ and where
\[ \sigma(k,\ell) = \operatorname{cov}(X_k, X_\ell) = E\big( X_k - E(X_k) \big)\big( X_\ell - E(X_\ell) \big). \]
This completes the proof of (b). $\square$
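Proposition 3.6(a), including the formula $Q \sigma Q^*$ for the dispersion matrix of $QX$, can be checked by simulation; the choices of $Q$, $\sigma$ and the sample size below are illustrative.

```python
import numpy as np

# Check of Proposition 3.6(a): if X ~ N(0, sigma), then QX is Gaussian
# with dispersion matrix Q sigma Q^T. Q, sigma and n are illustrative.

rng = np.random.default_rng(4)

sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
Q = np.array([[1.0, -1.0],
              [0.5,  2.0],
              [0.0,  1.0]])            # a linear map from R^2 to R^3

L = np.linalg.cholesky(sigma)
X = rng.standard_normal((200_000, 2)) @ L.T   # centered N(0, sigma) samples
Y = X @ Q.T                                   # samples of QX

emp = np.cov(Y, rowvar=False)          # empirical dispersion of QX
theory = Q @ sigma @ Q.T               # the claim of Proposition 3.6(a)
```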

3.7. Theorem. Let $\sigma : I \times I \to \mathbb{R}$ be a positive-definite function and let $\mu : I \to
\mathbb{R}$ be a map. There exists a probability space $(\Omega, \mathcal{F}, P)$ together with a Gaussian
process $(t, \omega) \mapsto X_t(\omega) = X(t, \omega)$, $t \in I$, $\omega \in \Omega$, such that $E(X_t) = \mu_t$ and such
that $\operatorname{cov}(X_s, X_t) = \sigma(s,t)$ for all $s, t \in I$.

Proof. The proof is essentially given in Example 3.4. $\square$

We conclude this section with the introduction of Brownian motion and Brownian
bridge as Gaussian processes. First we show that the function $\sigma : [0,\infty) \times
[0,\infty) \to \mathbb{R}$, defined by $\sigma(u,v) = \min(u,v)$, $u, v \in [0,\infty)$, and, for $t \geq 0$ fixed,
the function $\sigma_t : [0,t] \times [0,t] \to \mathbb{R}$, defined by $\sigma_t(u,v) = t \min(u,v) - uv$, $u$,
$v \in [0,t]$, are positive definite.




3.8. Proposition. The functions $\sigma(u,v) = \min(u,v)$, $u, v \in [0,\infty)$, $\sigma_t(u,v) =
t \min(u,v) - uv$, $u, v \in [0,t]$, and $\sigma_{\mathbb{R}}(u,v) = \frac12 \exp(-|u-v|)$, $u, v \in \mathbb{R}$, are
positive definite. In addition, the function $\sigma_0(u,v)$ defined by
\[ \sigma_0(u,v) = \frac12 \exp\big( -(u+v) \big)\big( \exp\big( 2 \min(u,v) \big) - 1 \big), \quad u, v \geq 0, \]
is positive definite.




Proof. Let $0 = s_0 < s_1 < s_2 < \cdots < s_n < t$ and let $\lambda_1, \ldots, \lambda_n$ be
complex numbers. The following identities are valid:
\[
\begin{aligned}
& \sum_{j=1}^n \frac{t(s_j - s_{j-1})}{(t - s_j)(t - s_{j-1})} \Big| \sum_{k=j}^n \lambda_k (t - s_k) \Big|^2 \\
& = \sum_{j=1}^n \sum_{k_1=j}^n \sum_{k_2=j}^n \lambda_{k_1} \overline{\lambda_{k_2}}\, (t - s_{k_1})(t - s_{k_2}) \Big( \frac{s_j}{t - s_j} - \frac{s_{j-1}}{t - s_{j-1}} \Big) \\
& = \sum_{k_1=1}^n \sum_{k_2=1}^n \lambda_{k_1} \overline{\lambda_{k_2}}\, (t - s_{k_1})(t - s_{k_2}) \sum_{j=1}^{\min(k_1,k_2)} \Big( \frac{s_j}{t - s_j} - \frac{s_{j-1}}{t - s_{j-1}} \Big) \\
& = \sum_{k_1=1}^n \sum_{k_2=1}^n \lambda_{k_1} \overline{\lambda_{k_2}}\, (t - s_{k_1})(t - s_{k_2})\, \frac{s_{\min(k_1,k_2)}}{t - s_{\min(k_1,k_2)}} \\
& = \sum_{k_1=1}^n \sum_{k_2=1}^n \lambda_{k_1} \overline{\lambda_{k_2}}\, s_{\min(k_1,k_2)} \big( t - s_{\max(k_1,k_2)} \big)
= \sum_{k_1,k_2=1}^n \lambda_{k_1} \overline{\lambda_{k_2}}\, \sigma_t\big( s_{k_1}, s_{k_2} \big), \tag{3.15}
\end{aligned}
\]
and since the left-hand side is non-negative, the function $\sigma_t$ is positive definite. Since
\[ \sum_{j,k=1}^n \lambda_j \overline{\lambda_k}\, t \min(s_j, s_k) = \sum_{j,k=1}^n \lambda_j \overline{\lambda_k}\, \sigma_t(s_j, s_k) + \Big| \sum_{j=1}^n \lambda_j s_j \Big|^2, \]
it follows that the function $\sigma$ is positive definite as well.




In order to prove that the function $\sigma_{\mathbb{R}}$ is positive definite we first notice that
the Fourier transform of the function $t \mapsto \exp(-|t|)$ is given by
\[ \int_{-\infty}^{\infty} e^{-|t|} e^{-i\xi t}\, dt = 2 \int_0^{\infty} \cos(\xi t)\, e^{-t}\, dt
= 2 \operatorname{Re} \int_0^{\infty} e^{-(1 + i\xi)t}\, dt = 2 \operatorname{Re} \frac{1}{1 + i\xi} = \frac{2}{1 + \xi^2}. \tag{3.16} \]
Hence upon taking the inverse Fourier transform we obtain:
\[ \frac12 e^{-|t-s|} = \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{\exp\big( i\xi(t-s) \big)}{\xi^2 + 1}\, d\xi. \tag{3.17} \]
Let $\lambda_1, \ldots, \lambda_n$ be complex numbers and let $s_1, \ldots, s_n$ be real numbers. From
(3.16) and (3.17) it follows that
\[ \sum_{k,\ell=1}^n \lambda_k \overline{\lambda_\ell}\, \frac12 \exp\big( -|s_k - s_\ell| \big)
= \frac{1}{2\pi} \int_{-\infty}^{\infty} \frac{1}{\xi^2 + 1} \Big| \sum_{k=1}^n \lambda_k \exp\big( i\xi s_k \big) \Big|^2\, d\xi \geq 0. \tag{3.18} \]

An easier way to establish the positive-definiteness of $\sigma_{\mathbb{R}}(u,v)$ is the following.
For $\lambda_1, \ldots, \lambda_n$ in $\mathbb{C}$ and for real numbers $s_1, \ldots, s_n$ we write
\[
\begin{aligned}
\sum_{k,\ell=1}^n \lambda_k \overline{\lambda_\ell} \exp\big( -|s_k - s_\ell| \big)
& = \sum_{k,\ell=1}^n \lambda_k \overline{\lambda_\ell} \min\big( \exp(-(s_k - s_\ell)),\, \exp(-(s_\ell - s_k)) \big) \\
& = \sum_{k,\ell=1}^n \exp(-s_k) \lambda_k\, \exp(-s_\ell) \overline{\lambda_\ell}\, \min\big( \exp(2 s_k),\, \exp(2 s_\ell) \big) \\
& = \int \Big| \sum_{k=1}^n \exp(-s_k) \lambda_k\, 1_{[0, \exp(2 s_k)]}(\xi) \Big|^2\, d\xi \geq 0.
\end{aligned}
\]
A similar argument can be used to prove that the function $\sigma_0(u,v)$ is positive
definite. $\square$
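A quick numerical companion to Proposition 3.8: at a few sample points the Gram matrices of the four kernels have non-negative spectra. The sample points and the value of $t$ are illustrative.

```python
import numpy as np

# The four kernels of Proposition 3.8 evaluated at a few sample points give
# Gram matrices with non-negative spectra. Points and t are illustrative.

s = np.array([0.1, 0.4, 0.9, 1.3, 1.9])
t = 2.0
S1, S2 = np.meshgrid(s, s, indexing="ij")
m = np.minimum(S1, S2)

kernels = {
    "sigma":   m,                                               # min(u, v)
    "sigma_t": t * m - S1 * S2,                                 # t min(u,v) - uv
    "sigma_R": 0.5 * np.exp(-np.abs(S1 - S2)),                  # oscillator
    "sigma_0": 0.5 * np.exp(-(S1 + S2)) * (np.exp(2 * m) - 1),  # Ornstein-Uhlenbeck
}

min_eigs = {name: np.linalg.eigvalsh(K).min() for name, K in kernels.items()}
```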





We now give existence theorems for the Wiener process (or Brownian motion), 
for Brownian bridge and for the oscillator process. 

3.9. Theorem. The following assertions are true.

(a) There exists a probability space (Ω, F, P) together with a real-valued
Gaussian process {b(s) : s ≥ 0}, called Wiener process or Brownian motion,
such that E(b(s)) = 0 and such that E(b(s_1)b(s_2)) = min(s_1, s_2)
for all s_1, s_2 ≥ 0.

(b) Fix t > 0. There exists a probability space (Ω, F, P) together with a
real-valued Gaussian process {X_t(s) : 0 ≤ s ≤ t}, called Brownian bridge,
such that E(X_t(s)) = 0 and such that

E(X_t(s_1)X_t(s_2)) = min(s_1, s_2) - s_1 s_2 / t

for all s_1, s_2 ∈ [0, t].

(c) There exists a probability space (Ω, F, P) together with a real-valued
Gaussian process {q(s) : s ∈ ℝ}, called oscillator process, which is
centered, i.e. E(q(s)) = 0, and which is such that

E(q(s_1)q(s_2)) = ½ exp(-|s_1 - s_2|)

for all s_1, s_2 ∈ ℝ.

(d) There exists a probability space (Ω, F, P) together with a real-valued
Gaussian process {X(s) : s ≥ 0}, called Ornstein-Uhlenbeck process,
such that E(X(s)) = 0 and such that

E(X(s_1)X(s_2)) = ½ exp(-(s_1 + s_2)) (exp(2 min(s_1, s_2)) - 1)   (3.19)
= ½ (exp(-|s_1 - s_2|) - exp(-(s_1 + s_2)))

for all s_1, s_2 ≥ 0.
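The covariance E(b(s_1)b(s_2)) = min(s_1, s_2) in part (a) is easy to probe by simulation. The following Python sketch (an illustration with parameter choices of our own, not part of the text) samples Brownian motion from independent Gaussian increments and estimates the covariance by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_paths(times, n_paths, rng):
    """Sample Brownian motion at the given increasing time points.

    Paths are built from independent Gaussian increments whose variance
    equals the time step, which gives E[b(s1) b(s2)] = min(s1, s2).
    """
    times = np.asarray(times, dtype=float)
    dt = np.diff(np.concatenate(([0.0], times)))
    increments = rng.standard_normal((n_paths, len(times))) * np.sqrt(dt)
    return np.cumsum(increments, axis=1)

s1, s2 = 0.5, 1.5
paths = brownian_paths([s1, s2], 200_000, rng)
cov = np.mean(paths[:, 0] * paths[:, 1])   # estimates min(0.5, 1.5) = 0.5
assert abs(cov - min(s1, s2)) < 0.02
```

The same construction, with the obvious changes to the covariance target, applies to the other processes of the theorem.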


2. Brownian motion and related processes 


In what follows x and y are real numbers and so is the drift μ. Let {b(s) : s ≥ 0} be
Brownian motion (starting in 0) on a probability space (Ω, F, P) (i.e. E[b(s)] =
0 and E[b(s_1) b(s_2)] = min(s_1, s_2)). Then the process {x + b(s) + μs : s ≥ 0}
is a Brownian motion with drift μ starting at x. Let {X_t(s) : 0 ≤ s ≤ t} be
a Brownian bridge on a probability space (Ω, F, P). Then the process

s ↦ (1 - s/t) x + (s/t) y + X_t(s),   0 ≤ s ≤ t,

is called pinned Brownian motion, namely pinned at x at time 0 and pinned at y
at time t. Let {b_j(s) : s ≥ 0}, 1 ≤ j ≤ d, be d independent Brownian motions
on the probability space (Ω, F, P). The process {(b_1(s), ..., b_d(s)) : s ≥ 0}
is called d-dimensional Brownian motion. The characteristic function for
d-dimensional Brownian motion starting at x ∈ ℝᵈ is given by:

E( exp( -i ∑_{j=1}^n (ξ_j, x + b(s_j)) ) )
= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{j,k=1}^n (ξ_j, ξ_k) min(s_j, s_k) ),   (3.20)




where x_0 = x and where 0 = s_0 < s_1 < ... < s_n. A similar definition can be
given for d-dimensional Brownian bridge and for the d-dimensional oscillator
process. Notice that a d-dimensional process {b(s) = (b_1(s), ..., b_d(s)) : s ≥ 0}
is a d-dimensional Brownian motion, starting at 0, on the probability space
(Ω, F, P) if and only if E(b_j(s_1) b_k(s_2)) = δ_{j,k} min(s_1, s_2). Let us prove the
above equalities.
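As an illustration of the Brownian bridge and of pinned Brownian motion, the sketch below builds the bridge from a Brownian path via X_t(s) = b(s) - (s/t) b(t), one of the standard representations (cf. Proposition 3.11), and checks the bridge covariance min(s_1, s_2) - s_1 s_2/t by Monte Carlo; all names and parameters are our own.

```python
import numpy as np

rng = np.random.default_rng(1)

# Brownian bridge on [0, t] built from a Brownian path via
# X_t(s) = b(s) - (s/t) b(t); pinned Brownian motion adds the line
# (1 - s/t) x + (s/t) y.  Monte Carlo check of the bridge covariance
# min(s1, s2) - s1 * s2 / t.
t, s1, s2, n = 2.0, 0.5, 1.2, 200_000
b_s1 = rng.standard_normal(n) * np.sqrt(s1)
b_s2 = b_s1 + rng.standard_normal(n) * np.sqrt(s2 - s1)
b_t = b_s2 + rng.standard_normal(n) * np.sqrt(t - s2)
x_s1 = b_s1 - (s1 / t) * b_t
x_s2 = b_s2 - (s2 / t) * b_t
cov = np.mean(x_s1 * x_s2)
expected = min(s1, s2) - s1 * s2 / t       # 0.5 - 0.3 = 0.2
assert abs(cov - expected) < 0.02
```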


3.10. Theorem. Let 0 = s_0 < s_1 < ... < s_n < ∞. Fix the vectors x and
ξ_1, ..., ξ_n in ℝᵈ. Put s_0 = 0 and x_0 = x. The following equalities are valid:

∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) | ∑_{j=ℓ}^n ξ_j |² = ∑_{j,k=1}^n min(s_j, s_k) (ξ_j, ξ_k);   (3.21)

∫_{ℝᵈ} dx_1 ... ∫_{ℝᵈ} dx_n exp( -i ∑_{j=1}^n (ξ_j, x_j) )
∏_{j=1}^n (2π(s_j - s_{j-1}))^{-d/2} exp( - |x_j - x_{j-1}|² / (2(s_j - s_{j-1})) )   (3.22)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{j,k=1}^n (ξ_j, ξ_k) min(s_j, s_k) ).   (3.23)

For a and b ∈ ℝᵈ we write (a + bi)² = |a|² + 2i(a, b) - |b|².


Proof. In order to see the first equality we write

∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) | ∑_{j=ℓ}^n ξ_j |² = ∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) ∑_{j_1,j_2=ℓ}^n (ξ_{j_1}, ξ_{j_2})

= ∑_{j_1,j_2=1}^n ∑_{ℓ=1}^{min(j_1,j_2)} (s_ℓ - s_{ℓ-1}) (ξ_{j_1}, ξ_{j_2}) = ∑_{j_1,j_2=1}^n (s_{min(j_1,j_2)} - s_0) (ξ_{j_1}, ξ_{j_2})

= ∑_{j_1,j_2=1}^n s_{min(j_1,j_2)} (ξ_{j_1}, ξ_{j_2}) = ∑_{j_1,j_2=1}^n min(s_{j_1}, s_{j_2}) (ξ_{j_1}, ξ_{j_2}).   (3.24)

For the second equality we proceed as follows:

∫_{ℝᵈ} dx_1 ... ∫_{ℝᵈ} dx_n exp( -i ∑_{j=1}^n (ξ_j, x_j) )
∏_{j=1}^n (2π(s_j - s_{j-1}))^{-d/2} exp( - |x_j - x_{j-1}|² / (2(s_j - s_{j-1})) )   (3.25)

(substitute x_j = x + y_1 + ... + y_j)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) ∫_{ℝᵈ} dy_1 ... ∫_{ℝᵈ} dy_n exp( -i ∑_{ℓ=1}^n ( ∑_{j=ℓ}^n ξ_j, y_ℓ ) )




× ∏_{j=1}^n (2π(s_j - s_{j-1}))^{-d/2} exp( - |y_j|² / (2(s_j - s_{j-1})) )

(substitute y_ℓ = (s_ℓ - s_{ℓ-1})^{1/2} z_ℓ)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) ∫_{ℝᵈ} dz_1 ... ∫_{ℝᵈ} dz_n
exp( -i ∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1})^{1/2} ( ∑_{j=ℓ}^n ξ_j, z_ℓ ) ) ∏_{ℓ=1}^n (2π)^{-d/2} exp( -½ |z_ℓ|² )

= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) | ∑_{j=ℓ}^n ξ_j |² )
∏_{ℓ=1}^n (2π)^{-d/2} ∫_{ℝᵈ} exp( -½ ( z_ℓ + i (s_ℓ - s_{ℓ-1})^{1/2} ∑_{j=ℓ}^n ξ_j )² ) dz_ℓ.





From Cauchy's theorem, it then follows that

∫_{ℝᵈ} dx_1 ... ∫_{ℝᵈ} dx_n exp( -i ∑_{j=1}^n (ξ_j, x_j) )
∏_{j=1}^n (2π(s_j - s_{j-1}))^{-d/2} exp( - |x_j - x_{j-1}|² / (2(s_j - s_{j-1})) )   (3.26)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) | ∑_{j=ℓ}^n ξ_j |² )
∏_{ℓ=1}^n (2π)^{-d/2} ∫_{ℝᵈ} dz_ℓ exp( -½ |z_ℓ|² )   (3.27)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{ℓ=1}^n (s_ℓ - s_{ℓ-1}) | ∑_{j=ℓ}^n ξ_j |² ),   (3.28)

where we used (2π)^{-d/2} ∫_{ℝᵈ} exp( -½ |z|² ) dz = 1.   (3.29)

(first equality)

= exp( -i ∑_{j=1}^n (ξ_j, x) ) exp( -½ ∑_{j,k=1}^n (ξ_j, ξ_k) min(s_j, s_k) ).   (3.30)

This completes the proof of Theorem 3.10. □
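Equality (3.21) is a finite summation-by-parts identity and can be checked numerically for random data; the following sketch (helper names are our own) verifies it to machine precision.

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical check of the summation identity (3.21):
#   sum_l (s_l - s_{l-1}) |sum_{j>=l} xi_j|^2
#     = sum_{j,k} min(s_j, s_k) (xi_j, xi_k).
n, d = 5, 3
s = np.sort(rng.uniform(0.1, 2.0, n))        # 0 < s_1 < ... < s_n
xi = rng.standard_normal((n, d))             # vectors xi_1, ..., xi_n
ds = np.diff(np.concatenate(([0.0], s)))     # increments s_l - s_{l-1}
tails = np.cumsum(xi[::-1], axis=0)[::-1]    # row l holds sum_{j >= l} xi_j
lhs = np.sum(ds * np.sum(tails**2, axis=1))
rhs = np.sum(np.minimum.outer(s, s) * (xi @ xi.T))
assert abs(lhs - rhs) < 1e-10
```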


In the following proposition we collect a number of interesting properties of the
(finite dimensional) joint distributions of some of the Gaussian processes we
introduced so far.

3.11. Proposition. Let {b(s) : s ≥ 0} be d-dimensional Brownian motion and
let

{X_t(s) : 0 ≤ s ≤ t}

be d-dimensional Brownian bridge. In addition let x and y be vectors in ℝᵈ and
let Q : ℝᵈ → ℝᵈ be an orthogonal linear map. Also fix a strictly positive number
a.

(a) The joint distributions of the processes

{b(s) : s > 0} and {s b(1/s) : s > 0}

coincide.

(b) The joint distributions of the processes

{b(as) : s ≥ 0} and {√a b(s) : s ≥ 0}

coincide.

(c) The joint distributions of the processes

{q(s) : s ∈ ℝ} and {e^{-s} b(½ e^{2s}) : s ∈ ℝ}

coincide.




(d) The joint distributions of the processes

{X(t) : t ≥ 0} and {e^{-t} b((e^{2t} - 1)/2) : t ≥ 0}

coincide. The process {X(t) : t ≥ 0} also possesses the same joint
distribution as {∫_0^t exp(-(t - s)) db(s) : t ≥ 0}.

(e) The joint distributions of the following processes also coincide:

{ (1 - s/t) x + (s/t) y + (1 - s/t) b(st/(t - s)) : 0 ≤ s < t },   (3.31)

{ (1 - s/t) x + (s/t) y + b(s) - (s/t) b(t) : 0 ≤ s ≤ t },   (3.32)

{ (1 - s/t) x + (s/t) y + X_t(s) : 0 ≤ s ≤ t }.   (3.33)

(f) The process {Qb(s) : s ≥ 0} is d-dimensional Brownian motion and so
its joint distribution coincides with that of {b(s) : s ≥ 0}.


Notice that instead of the “distribution” of a random variable or a stochastic 
process, the name “law” is in vogue. 

3.12. Remark. Put b_x(t) = x + b(t). Then {b_x(t) : t ≥ 0} is Brownian motion
that starts in x. Put X_x(t) = exp(-t)x + X(t). Then the process {X_x(t) : t ≥ 0}
is the Ornstein-Uhlenbeck process of initial velocity x.

3.13. Remark. The stochastic integral ∫_0^t exp(-(t - s)) db(s) can be defined as
the L²-limit of ∑_{j=1}^n exp(-(t - s_{j-1})) (b(s_j) - b(s_{j-1})), whenever max_{1≤j≤n} (s_j - s_{j-1}) tends
to zero. Here 0 = s_0 < s_1 < ... < s_n = t is a subdivision of the interval [0, t].
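The L²-sum in Remark 3.13 is easy to test numerically: by the Itô isometry the variance of ∫_0^t e^{-(t-s)} db(s) equals ∫_0^t e^{-2(t-s)} ds = (1 - e^{-2t})/2, which a left-endpoint Riemann sum of Gaussian increments reproduces. A hedged sketch (parameters are our own):

```python
import numpy as np

rng = np.random.default_rng(3)

# Left-endpoint Riemann-sum approximation of int_0^t exp(-(t-s)) db(s),
# as in Remark 3.13.  By the Ito isometry its variance should be close to
# int_0^t exp(-2(t-s)) ds = (1 - exp(-2 t)) / 2.
t, n_steps, n_paths = 1.0, 400, 100_000
s = np.linspace(0.0, t, n_steps + 1)
db = rng.standard_normal((n_paths, n_steps)) * np.sqrt(t / n_steps)
integrand = np.exp(-(t - s[:-1]))            # evaluated at left endpoints
approx = db @ integrand
var = np.var(approx)
assert abs(var - 0.5 * (1.0 - np.exp(-2.0 * t))) < 0.01
```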

3.14. Remark. Let f : ℝ → ℂ be a bounded Borel measurable function. Then
E[f(X_x(t))] is given by

E[f(X_x(t))] = ∫ f( e^{-t} x + √(1 - e^{-2t}) y ) exp(-y²) dy / √π.

Moreover the Ornstein-Uhlenbeck process is a strong Markov process.

3.15. Remark. Let {b_x(t) : t ≥ 0} be Brownian motion that starts at x (and
has drift zero). Fix s > 0. The processes

{b_x(s + t) - b_x(s) : t ≥ 0} and {b_x(t) - x : t ≥ 0}

possess the same (joint) distribution. In order to see this one may calculate the
Fourier transforms, or characteristic functions, of their distributions.

3.16. Remark. Suppose that the Markov process

{(Ω, F, P_x), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (ℝⁿ, B)}   (3.34)

is Brownian motion in ℝⁿ, and put p_0(t, x, y) = (2πt)^{-n/2} exp( -|x - y|² / (2t) ),
t > 0, x, y ∈ ℝⁿ. Define the measure μ^t_{x,y} by

μ^t_{x,y}(A) = E_x[ 1_A p_0(t - s, X(s), y) ],   (3.35)




where the event A belongs to F_s = σ(X(u) : u ≤ s), for s < t. Since the process
s ↦ p_0(t - s, X(s), y) is a P_x-martingale on the half-open interval 0 ≤ s < t,
it follows that the quantity μ^t_{x,y}(A) is well-defined: its value does not depend
on s, as long as A belongs to F_s and s < t. From the monotone class theorem
it follows that μ^t_{x,y} can be considered as a positive measure on the σ-field
given by F_{t-} = σ(X(s) : 0 ≤ s < t). Then the measure defined in (3.35)
is called the conditional Brownian bridge measure. It can be normalized upon
dividing it by the density p_0(t, x, y).

PROOF of Proposition 3.11. Since all the indicated processes are d-dimensional
Gaussian (the definition of a d-dimensional Gaussian process should be
obvious: in fact, in the discussion of 3.4 and in Theorem 3.7 the expected value
μ should be a map from I to ℝᵈ and the entries of the diffusion matrix σ should
be d × d-matrices), it suffices to show that the corresponding expectations and
covariance matrices are the same for the indicated processes. In most cases this
is a simple exercise. For example let us prove (f). Let q(k, ℓ) be the entries of
the matrix Q. Then

E( (Qb(s_1))_j (Qb(s_2))_k ) = ∑_{m=1}^d ∑_{n=1}^d q(j, m) q(k, n) E( b_m(s_1) b_n(s_2) )

= ∑_{m=1}^d ∑_{n=1}^d q(j, m) q(k, n) δ_{m,n} min(s_1, s_2) = ∑_{m=1}^d q(j, m) q(k, m) min(s_1, s_2)

= (QQ*)(j, k) min(s_1, s_2) = δ_{j,k} min(s_1, s_2).   (3.36)

This proves that {Qb(s) : s ≥ 0} is again d-dimensional Brownian motion. This
completes the brief outline of the proof of Proposition 3.11. □
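The remaining parts of Proposition 3.11 reduce to similar covariance computations; the scalar identities behind (a) and (c), for instance, can be confirmed by direct arithmetic (a sanity check, not part of the proof):

```python
import math

# Scalar covariance identities behind parts (a) and (c): the rescaled
# processes have exactly the Brownian, respectively oscillator, covariance.
#   (a)  s1 * s2 * min(1/s1, 1/s2) = min(s1, s2)
#   (c)  exp(-s1 - s2) * (1/2) min(exp(2 s1), exp(2 s2)) = (1/2) exp(-|s1 - s2|)
for s1, s2 in [(0.3, 0.7), (1.5, 0.2), (2.0, 2.0)]:
    assert abs(s1 * s2 * min(1.0 / s1, 1.0 / s2) - min(s1, s2)) < 1e-12

for s1, s2 in [(-1.0, 0.5), (0.2, 2.0), (0.7, 0.7)]:
    lhs = math.exp(-s1 - s2) * 0.5 * min(math.exp(2 * s1), math.exp(2 * s2))
    assert abs(lhs - 0.5 * math.exp(-abs(s1 - s2))) < 1e-12
```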




In the proof of the existence of a continuous version of Brownian motion, we
shall employ the following maximal inequality of Lévy.

3.17. Theorem. (Lévy) Let X_1, ..., X_n be random variables with values in ℝᵈ.
Suppose that the joint distribution of X_1, ..., X_n is invariant under every change
of sign (x_1, ..., x_n) ↦ (ε_1 x_1, ..., ε_n x_n), where ε_j = ±1. Put S_k = ∑_{j=1}^k X_j.
Then for every λ > 0

P( max_{1≤k≤n} |S_k| ≥ λ ) ≤ 2 P( |S_n| ≥ λ ).   (3.37)

If d = 1, then

P( max_{1≤k≤n} S_k ≥ λ ) ≤ 2 P( S_n ≥ λ ).   (3.38)

Proof. We prove (3.37). Put

A_k = ⋂_{j=1}^{k-1} {|S_j| < λ} ∩ {|S_k| ≥ λ}

and put A = ⋃_{k=1}^n A_k. Write T_k = ∑_{j=1}^k X_j - ∑_{j=k+1}^n X_j. Then S_k = ½ S_n + ½ T_k
and so

{|S_k| ≥ λ} ⊆ {|S_n| ≥ λ} ∪ {|T_k| ≥ λ}.

Hence, from the invariance of the joint distribution of (X_1, ..., X_n) under sign
changes we see

P(A_k) = P(A_k, |S_k| ≥ λ)

≤ P(A_k, |S_n| ≥ λ) + P(A_k, |T_k| ≥ λ) = 2 P(A_k, |S_n| ≥ λ).

Since the events A_k, 1 ≤ k ≤ n, are mutually disjoint, we infer

P( max_{1≤k≤n} |S_k| ≥ λ ) = P(A) = ∑_{k=1}^n P(A_k) ≤ 2 ∑_{k=1}^n P(A_k, |S_n| ≥ λ) ≤ 2 P(|S_n| ≥ λ).

This proves (3.37). The proof of (3.38) is similar and will be left to the reader.
Altogether this completes the proof of Theorem 3.17. □
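Lévy's inequality (3.37) can be illustrated with a symmetric ±1 random walk, whose joint distribution is clearly invariant under sign changes of the steps. A small Monte Carlo sketch (parameters chosen by us):

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo illustration of (3.37) for a symmetric +-1 random walk,
# whose joint law is invariant under all sign changes of the steps.
n, n_paths, lam = 20, 100_000, 6.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
partial = np.cumsum(steps, axis=1)                 # S_1, ..., S_n
p_max = np.mean(np.max(np.abs(partial), axis=1) >= lam)
p_end = np.mean(np.abs(partial[:, -1]) >= lam)
assert p_max <= 2.0 * p_end + 0.01                 # (3.37), with MC slack
```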

Let {X(t) : t ≥ 0} be Brownian motion on the probability space (Ω, F, P). We
shall prove that there exists a continuous process {b(t) : t ≥ 0} that is indistinguishable
from the process {X(t) : t ≥ 0}. This means that P(X(t) = b(t)) = 1
for all t ≥ 0.

3.18. Theorem. Let {X(t) : t ≥ 0} be Brownian motion on some probability
space (Ω, F, P). Then there exists a stochastic process {b(t) : t ≥ 0} which is P-
almost surely continuous, and that is also a Brownian motion on the probability
space (Ω, F, P) and that is indistinguishable from the process {X(s) : s ≥ 0}.
Here we suppose that F contains the P-zero sets.




Proof. Without loss of generality we may and do assume that the Brownian
motion {X(s) : s ≥ 0} has drift 0 and diffusion matrix identity. For the proof
we shall rely on Theorem 3.17 and on the Borel-Cantelli lemma, which reads as
follows. Let (A_n : n ∈ ℕ) be a sequence of events with ∑_{n=1}^∞ P(A_n) < ∞. Then
P( ⋂_{m=1}^∞ ⋃_{n≥m} A_n ) = 0. In Theorem 3.18 we choose the sequence (A_n : n ∈ ℕ)
as follows. Let D be the set of non-negative dyadic rational numbers and put

A_n = { max_{0≤k<n2^n} sup_{q ∈ D ∩ [k2^{-n}, (k+1)2^{-n}]} |X(q) - X(k2^{-n})| > 1/n }.

An application of Theorem 3.17, with X(t + jδ2^{-m}) - X(t + (j - 1)δ2^{-m}) replacing
X_j, yields

P( max_{1≤j≤2^m} |X(t + jδ2^{-m}) - X(t)| ≥ α ) ≤ 2 P( |X(t + δ) - X(t)| ≥ α )

≤ (2/α⁴) E|X(t + δ) - X(t)|⁴ = (2δ²/α⁴) (2π)^{-d/2} ∫ exp( -½ |y|² ) |y|⁴ dy = 2δ²(2d + d²)/α⁴.   (3.39)

In (3.39) we let m tend to infinity to obtain:

P( sup_{0≤q≤1, q∈D} |X(t + qδ) - X(t)| ≥ α ) ≤ 2δ²(2d + d²)/α⁴.   (3.40)

Hence, with J_{n,k} = [k2^{-n}, (k+1)2^{-n}] (see also (3.46) below), and with t = k2^{-n}
and δ = 2^{-n},

P( max_{0≤k<n2^n} sup_{q ∈ D ∩ J_{n,k}} |X(q) - X(k2^{-n})| > 1/n )

≤ ∑_{k=0}^{n2^n - 1} P( sup_{q ∈ D ∩ J_{n,k}} |X(q) - X(k2^{-n})| > 1/n )

≤ n2^n · 2(2d·2^{-2n} + d²·2^{-2n}) n⁴ = 2(2d + d²) n⁵ 2^{-n} ≤ 6d² n⁵ 2^{-n}.   (3.41)


Since the sequence in (3.41) is summable, we may apply the Borel-Cantelli lemma
to conclude that P-almost surely, for all t > 0, the path q ↦ X(q) is uniformly
continuous on D ∩ [0, t]. So it makes sense to define the P-almost surely continuous
function s ↦ b(s) by b(s) = lim_{q→s, q∈D} X(q). It is not so difficult to see that
the process {b(s) : s ≥ 0} is also a Brownian motion. In fact let ξ_1, ..., ξ_n be n
vectors in ℝᵈ and suppose 0 = s_0 < s_1 < ... < s_n. Then we choose sequences
0 = q_0(m) < s_1 < q_1(m) < s_2 < q_2(m) < ... < s_{n-1} < q_{n-1}(m) < s_n < q_n(m),
m ∈ ℕ, in D, such that q_k(m) ↓ s_k if m tends to infinity, and this for 1 ≤ k ≤ n.
Since {X(s) : s ≥ 0} is d-dimensional Brownian motion we have

E( exp( -i ∑_{k=1}^n (ξ_k, X(q_k(m))) ) )




= exp( -½ ∑_{j=1}^n | ∑_{k=j}^n ξ_k |² (q_j(m) - q_{j-1}(m)) ).   (3.42)

In (3.42) we let m tend to infinity to obtain

E( exp( -i ∑_{k=1}^n (ξ_k, b(s_k)) ) ) = exp( -½ ∑_{j=1}^n | ∑_{k=j}^n ξ_k |² (s_j - s_{j-1}) ).   (3.43)


This equality shows that {b(s) : s ≥ 0} is a Brownian motion. In order to prove
that it cannot be distinguished from the process {X(s) : s ≥ 0}, we notice first
that

E( exp( -i (ξ, X(t + s) - X(t)) ) ) = exp( -½ |ξ|² s ),   ξ ∈ ℝᵈ.   (3.44)

Hence, for ξ ∈ ℝᵈ,

E| exp( -i (ξ, X(t)) ) - exp( -i (ξ, b(t)) ) |²

= E( 2 - exp( i (ξ, X(t) - b(t)) ) - exp( -i (ξ, X(t) - b(t)) ) )

= lim_{q↓t, q∈D} ( 2 - E( exp( -i (ξ, X(q) - X(t)) ) ) - E( exp( -i (ξ, X(t) - X(q)) ) ) )

= 2 - 2 lim_{q↓t, q∈D} exp( -½ |ξ|² (q - t) ) = 0.   (3.45)

From (3.45) it readily follows that the processes {X(s) : s ≥ 0} and {b(s) : s ≥ 0}
cannot be distinguished. □





In this proof of Theorem 3.18 we have also used the fourth moment

E|X(t + s) - X(t)|⁴.

From (3.44) it follows that this moment does not depend on t and hence
E|X(t + s) - X(t)|⁴ = E|X(s) - X(0)|⁴ = E|X(s)|⁴.

A way of computing E|X(s)|⁴ is the following (Δ_ξ denotes the Laplacian with
respect to ξ):

E|X(s)|⁴ = Δ_ξ² E( exp( -i (ξ, X(s)) ) ) |_{ξ=0} = Δ_ξ² exp( -½ s|ξ|² ) |_{ξ=0}

= ( (s²|ξ|² - ds)² + 2ds² - 4s³|ξ|² ) exp( -½ s|ξ|² ) |_{ξ=0} = 2ds² + d²s².   (3.46)
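The value 2ds² + d²s² in (3.46) agrees with the fourth moment of an N(0, sI_d) vector, which a quick simulation (our parameters) confirms:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo check of (3.46): for d-dimensional Brownian motion,
# E |X(s)|^4 = 2 d s^2 + d^2 s^2, since X(s) ~ N(0, s I_d).
d, s, n = 3, 0.7, 400_000
x = rng.standard_normal((n, d)) * np.sqrt(s)
m4 = np.mean(np.sum(x**2, axis=1) ** 2)
expected = 2 * d * s**2 + d**2 * s**2      # = 15 * 0.49 = 7.35
assert abs(m4 - expected) < 0.2
```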


In the following theorem we compute the finite dimensional distributions of d-
dimensional Brownian motion starting at 0 and possessing drift μ. Therefore
we define the Gaussian kernel p(t, x, y) by p(t, x, y) = (2πt)^{-d/2} exp( -|x - y|² / (2t) ).
Notice the Chapman-Kolmogorov identity

p(s, x, z) p(t, z, y) = p(s + t, x, y) p( st/(s + t), (tx + sy)/(s + t), z ).
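The displayed identity can be verified pointwise in dimension one (the d-dimensional case factorizes over coordinates); a minimal check with values of our own choosing:

```python
import math

def p(t, x, y):
    """One-dimensional Gaussian kernel p(t, x, y)."""
    return math.exp(-(x - y) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

# Check p(s,x,z) p(t,z,y) = p(s+t,x,y) p(st/(s+t), (t x + s y)/(s+t), z).
s, t, x, y, z = 0.4, 1.1, -0.3, 0.8, 0.25
lhs = p(s, x, z) * p(t, z, y)
rhs = p(s + t, x, y) * p(s * t / (s + t), (t * x + s * y) / (s + t), z)
assert abs(lhs - rhs) < 1e-12
```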


3.19. Theorem. Let {b(s) : s ≥ 0} be d-dimensional Brownian motion with
diffusion matrix identity, with drift 0 and which starts in 0. Let f_1, ..., f_n be
bounded Borel measurable functions on ℝᵈ and let 0 = s_0 < s_1 < ... < s_n.
Then

E( ∏_{j=1}^n f_j( x + b(s_j) + μ s_j ) )

= ∫_{ℝᵈ} ... ∫_{ℝᵈ} dx_1 ... dx_n ∏_{j=1}^n f_j(x_j) ∏_{j=1}^n p( s_j - s_{j-1}, x_{j-1} - μ s_{j-1}, x_j - μ s_j ),   (3.47)

where x_0 = x.


3.20. Remark. Equality (3.47) determines the joint distribution of the process

{ X(s) := x + b(s) + μs : s ≥ 0 }.

This will follow from the monotone class theorem. The vector μ is the so-called
drift vector and the process X = {X(s) : s ≥ 0} starts at x.




3.21. Remark. Another consequence of equality (3.47) is the fact that the
random vector b(t) - b(s), t > s fixed, is independent of the σ-field generated by
the process {b(a) : 0 ≤ a ≤ s}. This fact also follows from (3.48) below together
with the monotone class theorem. For ξ ∈ ℝᵈ, ξ_j ∈ ℝᵈ, 1 ≤ j ≤ n, t > s ≥ s_n >
... > s_1 > s_0 = 0 the following identity is valid and relevant:

E( exp( -i (ξ, b(t) - b(s)) - i ∑_{j=1}^n (ξ_j, b(s_j)) ) )

= exp( -½ |ξ|² (t - s) - ½ ∑_{j,k=1}^n min(s_j, s_k) (ξ_j, ξ_k) )

= E( exp( -i (ξ, b(t) - b(s)) ) ) E( exp( -i ∑_{j=1}^n (ξ_j, b(s_j)) ) ).   (3.48)

In other words a Brownian motion (diffusion matrix identity) is a Gaussian
process {b(s) : s ≥ 0} with independent increments b(t) - b(s), t > s, with
mean μ(t - s) and covariance matrix cov( b_k(t) - b_k(s), b_ℓ(t) - b_ℓ(s) ) = δ_{k,ℓ}(t - s).


PROOF. Theorem 3.10 shows that the equality in (3.47) holds for functions
f_j, 1 ≤ j ≤ n, of the form

f_j(x) = ∫ exp( -i (ξ, x) ) dμ_j(ξ),   (3.49)

where μ_j = δ_{ξ_j} is the Dirac measure at ξ_j. Fubini's theorem then implies that
(3.47) also holds for functions f_j, 1 ≤ j ≤ n, of the form (3.49) with μ_j(B) =
∫_B g_j(ξ) dξ, with g_j ∈ L¹(ℝᵈ), 1 ≤ j ≤ n. Since, by the Stone-Weierstrass
theorem, functions of the form (3.49) with μ_j(B) = ∫_B g_j(ξ) dξ, where g_j ∈ L¹(ℝᵈ),
are dense in the space C_0(ℝᵈ), it follows that (3.47) holds for functions f_j ∈
C_0(ℝᵈ), 1 ≤ j ≤ n. By approximating indicator functions of open subsets from
below by functions in C_0(ℝᵈ) it follows that the equality in (3.47) holds for
functions f_j which are indicator functions of open subsets. A Dynkin argument
(or the monotone class theorem) then shows that (3.47) is also true if the functions
f_j are indicator functions of Borel subsets B_j, 1 ≤ j ≤ n. But then this
equality also holds for bounded Borel functions f_j, 1 ≤ j ≤ n.

This completes the proof of Theorem 3.19. □


Next we want to define standard Brownian motion, with drift vector μ, that
starts at x ∈ ℝᵈ.

3.22. Definition. The standard Brownian motion, starting at x ∈ ℝᵈ and with
drift μ, is defined as the canonical Gaussian process {X(s) : s ≥ 0} defined on
(Ω, F, P_x) with the property that the increments X(t + h) - X(t) are mutually
independent and have P_x-expectation μh. Moreover it starts P_x-almost surely at
x, i.e. P_x(X(0) = x) = 1, and cov( X_k(t + h) - X_k(t), X_ℓ(t + h) - X_ℓ(t) ) =
δ_{k,ℓ} h. The covariance is of course also taken with respect to P_x. The process is
canonical because for Ω we take Ω = C([0, ∞), ℝᵈ), and for X(t) we take X(t)(ω) =



ω(t), ω ∈ Ω. For F we take the σ-field in Ω generated by the state variables
{X(s) : s ≥ 0}. For all this we often write

{(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (ℝᵈ, B)}.

Here the shift or translation operators ϑ_t, t ≥ 0, are defined by ϑ_t(ω)(s) =
ω(s + t), ω ∈ Ω. We also introduce the filtration (F_t : t ≥ 0) defined as the full
history: F_t is the σ-field generated by the variables X(s), 0 ≤ s ≤ t. We shall
also need the right closure F_{t+} defined by F_{t+} = ⋂_{s>t} F_s.

In the following result we give some interesting martingale properties for Brow¬ 
nian motion. 

3.23. Proposition. Let

{(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (ℝᵈ, B)}

be standard Brownian motion that starts at x ∈ ℝᵈ and that has drift μ. For
t > s the variable X(t) - X(s) does not depend on the σ-field F_s. The following
processes are P_x-martingales with respect to the filtration F_t, t ≥ 0:

t ↦ X(t) - tμ,   t ↦ |X(t) - tμ|² - dt.

Proof. The fact that the increment X(t) - X(s) does not depend on the
past F_s is explained in Remark 3.21 following Theorem 3.19. The other assertions
are consequences of this. Let s and t be positive real numbers. Then we
have

E_x( X(s + t) - (s + t)μ | F_s ) - (X(s) - sμ)

= E_x( X(s + t) - X(s) | F_s ) - tμ

(increments are independent of the past)

= E_x( X(s + t) - X(s) ) - tμ = tμ - tμ = 0.   (3.50)

Similarly, but more complicated, we also see

E_x[ |X(s + t) - (s + t)μ|² - d(s + t) - |X(s) - sμ|² + ds | F_s ]

= E_x[ |X(s + t) - X(s) - tμ + X(s) - sμ|² - dt - |X(s) - sμ|² | F_s ]

= E_x[ |X(s + t) - X(s) - tμ|² - dt + 2( X(s + t) - X(s) - tμ, X(s) - sμ ) | F_s ]

(use (3.50))

= E_x[ |X(s + t) - X(s) - tμ|² | F_s ] - dt

(again the independence of the increments from the past)

= E_x[ |X(s + t) - X(s) - tμ|² ] - dt

= ∑_{k=1}^d cov( X_k(s + t) - X_k(s), X_k(s + t) - X_k(s) ) - dt = dt - dt = 0.   (3.51)

This proves Proposition 3.23. □
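The two martingale statements rest on E_x[X(t) - tμ] = x and E_x|X(t) - x - tμ|² = dt; a simulation started at x = 0 (our choice of parameters) checks both:

```python
import numpy as np

rng = np.random.default_rng(6)

# Sanity check behind Proposition 3.23, started at x = 0:
# E[X(t) - t mu] = 0 and E|X(t) - t mu|^2 = d t for Brownian motion
# with drift mu, since X(t) ~ N(t mu, t I_d).
d, t, n = 2, 1.3, 300_000
mu = np.array([0.5, -1.0])
x_t = rng.standard_normal((n, d)) * np.sqrt(t) + t * mu
centered = x_t - t * mu
assert np.allclose(np.mean(centered, axis=0), 0.0, atol=0.02)
assert abs(np.mean(np.sum(centered**2, axis=1)) - d * t) < 0.05
```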




So far we have looked at Brownian motion as a Gaussian process. On the 
other hand it is also a Markov process. We would like to discuss that now. In 
fact mathematically speaking equality (3.47) in Theorem 3.19 is an equivalent 
form of the Markov property. As already indicated in Remark 3.20 following 
Theorem 3.19 the monotone class theorem is important for the proofs of the 
several versions of the Markov property. 

3.24. Definition. Let Ω be a set and let S be a collection of subsets of Ω. Then
S is a Dynkin system if it has the following properties:

(a) Ω ∈ S;

(b) if A and B belong to S and if A ⊇ B, then A∖B belongs to S;

(c) if (A_n : n ∈ ℕ) is an increasing sequence of elements of S, then the union
⋃_{n=1}^∞ A_n belongs to S.











The following result on Dynkin systems is well-known. 

3.25. Theorem. Let M be a collection of subsets of Ω, which is stable under
finite intersections. The Dynkin system generated by M coincides with the σ-
field generated by M.

3.26. Theorem. Let Ω be a set and let M be a collection of subsets of Ω, which
is stable (or closed) under finite intersections. Let H be a vector space of real-
valued functions on Ω satisfying:

(i) the constant function 1 belongs to H and 1_A belongs to H for all A ∈ M;

(ii) if (f_n : n ∈ ℕ) is an increasing sequence of non-negative functions in H such
that f = sup_{n∈ℕ} f_n is finite (bounded), then f belongs to H.

Then H contains all real-valued (bounded) functions on Ω that are
σ(M)-measurable.

Proof. Put D = {A ⊆ Ω : 1_A ∈ H}. Then by (i) Ω belongs to D and
D ⊇ M. If A and B are in D and if B ⊆ A, then A∖B belongs to D. If
(A_n : n ∈ ℕ) is an increasing sequence in D, then 1_{⋃A_n} = sup_n 1_{A_n} belongs to
D by (ii). Hence D is a Dynkin system that contains M. Since M is closed
under finite intersection, it follows by Theorem 3.25 that D ⊇ σ(M). If f ≥ 0
is measurable with respect to σ(M), then

f = sup_n 2^{-n} ∑_{j=1}^∞ 1_{{f ≥ j2^{-n}}}.   (3.52)

Since the functions 1_{{f ≥ j2^{-n}}}, j, n ∈ ℕ, belong to H, we see that f belongs to H. Here we
employed the fact that σ(M) ⊆ D. If f is σ(M)-measurable, then we write f
as a difference of two non-negative σ(M)-measurable functions. □

The previous theorems (Theorems 3.25 and 3.26) are used in the following form.
Let Ω be a set and let (E_i, E_i)_{i∈I} be a family of measurable spaces, indexed by an
arbitrary set I. For each i ∈ I, let S_i denote a collection of subsets of E_i, closed
under finite intersection, which generates the σ-field E_i, and let f_i : Ω → E_i be
a map from Ω to E_i. In this context the following two propositions follow.

3.27. Proposition. Let M be the collection of all sets of the form

⋂_{i∈J} f_i^{-1}(A_i),   A_i ∈ S_i,

i ∈ J, J ⊆ I, J finite. Then M is a collection of subsets of Ω which is stable
under finite intersection and σ(M) = σ(f_i : i ∈ I).

3.28. Proposition. Let H be a vector space of real-valued functions on Ω such
that:

(i) the constant function 1 belongs to H;

(ii) if (h_n : n ∈ ℕ) is an increasing sequence of non-negative functions in H
such that h = sup_n h_n is finite (bounded), then h belongs to H;

(iii) H contains all products of the form ∏_{i∈J} 1_{A_i} ∘ f_i, J ⊆ I, J finite, and
A_i ∈ S_i, i ∈ J.




Under these assumptions H contains all real-valued (bounded) functions on Ω
that are measurable with respect to σ(f_i : i ∈ I).


The Theorems 3.25 and 3.26 and the Propositions 3.27 and 3.28 are called the 
monotone class theorem. 

In the following theorem F is the σ-field generated by {X(s) : s ≥ 0} and F_t is
the σ-field generated by the past or full history, i.e. F_t = σ(X(s) : 0 ≤ s ≤ t).
If T is an (F_{t+})-stopping time we write

F_{T+} = { A ∈ F : A ∩ {T ≤ t} ∈ F_{t+} for all t ≥ 0 }.

An (F_{t+})-stopping time is an F-measurable map T from Ω to [0, ∞] with the property
that {T ≤ t} belongs to F_{t+} for all t ≥ 0.

Notice that stopping times may take infinite values. Often this is very interest¬ 
ing. 

3.29. Theorem. Let {(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (ℝᵈ, B)}, x ∈ ℝᵈ,
be d-dimensional Brownian motions. Then the following conditions are verified:

(a_1) For every α > 0, for every t ≥ 0 and for every open subset U of ℝᵈ,
the set {x ∈ ℝᵈ : P_x(X(t) ∈ U) > α} is open;

(a_2) For every α > 0, for every t ≥ 0 and for every compact subset K of ℝᵈ,
the set {x ∈ ℝᵈ : P_x(X(t) ∈ K) ≥ α} is compact;

(b) For every open subset U of ℝᵈ and for every x ∈ U, the equality
lim_{t↓0} P_x(X(t) ∈ U) = 1 is valid.

Moreover d-dimensional Brownian motion has the following properties:

(i) For all t ≥ 0 and for all bounded random variables Y : Ω → ℂ the equality

E_x( Y ∘ ϑ_t | F_t ) = E_{X(t)}(Y)   (3.53)

holds P_x-almost surely for all x ∈ ℝᵈ;

(ii) For all finite tuples 0 < t_1 < t_2 < ... < t_n < ∞ together with Borel subsets
B_1, ..., B_n of ℝᵈ the equality

P_x( X(t_1) ∈ B_1, ..., X(t_n) ∈ B_n )

= ∫_{B_1} ... ∫_{B_{n-1}} ∫_{B_n} P(t_n - t_{n-1}, x_{n-1}, dx_n) P(t_{n-1} - t_{n-2}, x_{n-2}, dx_{n-1})
... P(t_2 - t_1, x_1, dx_2) P(t_1, x, dx_1)   (3.54)

is valid for all x ∈ ℝᵈ (here P_x(X(t) ∈ B) = P(t, x, B));

(iii) For every (F_{t+})-stopping time T and for every bounded random variable
Y : Ω → ℂ the equality

E_x( Y ∘ ϑ_T | F_{T+} ) = E_{X(T)}(Y)   (3.55)

holds P_x-almost surely on {T < ∞} for all x ∈ ℝᵈ;



(iv) Let B_1 be the Borel field of [0, ∞). For every bounded function F : [0, ∞) ×
Ω → ℂ, which is measurable with respect to B_1 ⊗ F, and for every (F_{t+})-stopping
time T the equality

E_x( {ω ↦ F(T(ω), ϑ_{T(ω)}(ω))} | F_{T+} ) = {ω ↦ E_{X(T(ω))}( {ω' ↦ F(T(ω), ω')} )}   (3.56)

holds P_x-almost surely on {T < ∞} for all x ∈ ℝᵈ.

Since d-dimensional Brownian motion verifies (a_1), (a_2) and (b), the properties
in (i), (ii), (iii) and (iv) are all equivalent. Properties (i) and (ii) are always
equivalent and so are (iii) and (iv). The implication (iii) ⇒ (ii) is also clear. For
the reverse implication the full strength of (a_1), (a_2) and (b) is employed. The
fact that Brownian motion possesses property (ii) is a consequence of Theorem
3.19. In fact the right continuity of paths is very important. Since we have
proved that Brownian motion possesses continuous paths P_x-almost surely this
condition is verified. Property (i) is called the Markov property and property
(iii) is called the strong Markov property. Equality (3.56) is called the strong
time-dependent Markov property. We shall not prove this result. It is part of the
general theory of Markov processes and their sample path properties. It is also
closely connected to the theory of Feller semigroups. As in Theorem 3.29 let
{(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (ℝᵈ, B)} be Brownian motion starting
in x. In fact the family of operators {P(t) : t ≥ 0} defined by [P(t)f](x) =
E_x( f(X(t)) ), f ∈ L^∞(ℝᵈ), t ≥ 0, is a Feller semigroup, because it possesses the
properties mentioned in the following definition.

In what follows E is a second countable locally compact Hausdorff space, e.g.
E = ℝᵈ. We define a Feller semigroup as follows.

3.30. Definition. A family {P(t) : t ≥ 0} of operators defined on L^∞(E) is a
Feller semigroup, or, more precisely, a Feller-Dynkin semigroup on C_0(E) if it
possesses the following properties:

(i) It leaves C_0(E) invariant: P(t)C_0(E) ⊆ C_0(E) for t ≥ 0;

(ii) It is a semigroup: P(s + t) = P(s) ∘ P(t) for all s, t ≥ 0, and P(0) = I;

(iii) It consists of contraction operators: ||P(t)f||_∞ ≤ ||f||_∞ for all t ≥ 0 and
for all f ∈ C_0(E);

(iv) It is positivity preserving: f ≥ 0, f ∈ C_0(E), implies P(t)f ≥ 0;

(v) It is continuous for t = 0: lim_{t↓0} [P(t)f](x) = f(x), for all f ∈ C_0(E)
and for all x ∈ E.

In the presence of (iii) and (ii), property (v) is equivalent to:

(v') lim_{t↓0} ||P(t)f - f||_∞ = 0 for all f ∈ C_0(E).

So a Feller semigroup is in fact strongly continuous in the sense that, for
every f ∈ C_0(E),

lim_{s→t} ||P(s)f - P(t)f||_∞ = 0.
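The Gaussian (heat) semigroup [P(t)f](x) = E(f(x + b(t))) is the model example behind Definition 3.30. The sketch below discretizes it by convolution on a grid (grid size and test function are our own choices) and checks the semigroup law (ii):

```python
import numpy as np

# Grid-based sketch of the Gaussian (heat) semigroup
# [P(t)f](x) = E f(x + b(t)), checking the semigroup law P(s+t)f = P(s)P(t)f.
xs = np.linspace(-15.0, 15.0, 3001)
dx = xs[1] - xs[0]

def heat(t, f_vals):
    """Apply P(t) by convolving with the Gaussian kernel on the grid."""
    kernel = np.exp(-xs**2 / (2.0 * t)) / np.sqrt(2.0 * np.pi * t)
    return np.convolve(f_vals, kernel, mode="same") * dx

f = np.exp(-xs**2)                         # a test function in C_0(R)
lhs = heat(0.5 + 0.7, f)
rhs = heat(0.5, heat(0.7, f))
assert np.max(np.abs(lhs - rhs)) < 1e-6
```

The contraction and positivity properties (iii) and (iv) are equally visible in this discretization, since the kernel is non-negative with integral one.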




It is perhaps useful to observe that C_0(E), equipped with the supremum norm
||·||_∞, is a Banach space (in fact it is a Banach algebra). A function f : E → ℂ
belongs to C_0(E) if it is continuous and if for every ε > 0, there exists a compact
subset K of E such that |f(x)| < ε for x ∉ K. We need one more definition.
Let {P(t) : t ≥ 0} be a Feller semigroup. Define for U an open subset of E, the
transition probability P(t, x, U), t ≥ 0, x ∈ E, by

P(t, x, U) = sup { [P(t)u](x) : 0 ≤ u ≤ 1_U, u ∈ C_0(E) }.

This transition function can be extended to all Borel subsets by writing

P(t, x, K) = inf { P(t, x, U) : U open, U ⊇ K },

for K a compact subset of E. If B is a Borel subset of E, then we write

P(t, x, B) = inf { P(t, x, U) : U ⊇ B, U open } = sup { P(t, x, K) : K ⊆ B, K compact }.

It then follows that the mapping B ↦ P(t, x, B) is a Borel measure on the
Borel field of E. The Feller semigroup is said to be conservative if, for all t ≥ 0
and for all x ∈ E, P(t, x, E) = 1.





We want to conclude this section with a convergence result for Gaussian processes.

3.31. Proposition. Let (X_s^(n) : s ∈ I), n ∈ ℕ, be a sequence of Gaussian processes. Let (X_s : s ∈ I) be a process with the property that

E[X_u X_v] = lim_{n→∞} E[X_u^(n) X_v^(n)], for all u and v in I, and
E[X_u] = lim_{n→∞} E[X_u^(n)], for all u ∈ I.

Also suppose that, in the weak sense,

(X_{u_1}^(n), ..., X_{u_m}^(n)) → (X_{u_1}, ..., X_{u_m}), n → ∞,

for all finite subsets (u_1, ..., u_m) of I. Then the process (X_s : s ∈ I) is Gaussian as well.

Proof. Let ξ_1, ..., ξ_m be real numbers, and let u_1, ..., u_m be members of I. Then

E[exp(−i ∑_{k=1}^m ξ_k X_{u_k}^(n))]
= exp(−i ∑_{k=1}^m ξ_k E(X_{u_k}^(n)) − ½ ∑_{k,ℓ=1}^m ξ_k ξ_ℓ cov(X_{u_k}^(n), X_{u_ℓ}^(n))).

Next let n tend to infinity to obtain (here we employ Lévy's theorem on weak convergence):

E[exp(−i ∑_{k=1}^m ξ_k X_{u_k})]
= exp(−i ∑_{k=1}^m ξ_k E[X_{u_k}] − ½ ∑_{k,ℓ=1}^m ξ_k ξ_ℓ E[(X_{u_k} − E(X_{u_k}))(X_{u_ℓ} − E(X_{u_ℓ}))]).

So the result in Proposition 3.31 follows. □

For Lévy's weak convergence theorem see Theorem 5.42.
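The identity driving the proof — that a Gaussian vector's characteristic function is exp(−i ξ·μ − ½ ξᵀCξ), determined entirely by means and covariances — can be checked numerically. The sketch below is our own illustration (the parameter values are arbitrary); it compares a Monte Carlo estimate of E[exp(−i ξ·X)] with the closed form.

```python
import numpy as np

rng = np.random.default_rng(42)
mu = np.array([0.5, -1.0])
A = np.array([[1.0, 0.3], [0.0, 0.8]])
C = A @ A.T                                # covariance of X = mu + A Z

# Sample the Gaussian vector X
Z = rng.standard_normal((200000, 2))
X = mu + Z @ A.T

xi = np.array([0.7, -0.4])
mc = np.mean(np.exp(-1j * (X @ xi)))                   # Monte Carlo E[exp(-i xi.X)]
exact = np.exp(-1j * (xi @ mu) - 0.5 * xi @ C @ xi)    # Gaussian characteristic function
assert abs(mc - exact) < 0.01
```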

3.32. Theorem. Brownian motion is a Markov process. More precisely, (3.53) is satisfied.

Proof. Let F be a bounded stochastic variable. We have to show the following identity:

E_x[F ∘ ϑ_t | F_t] = E_{X(t)}[F], P_x-almost surely.

It suffices to show that

E_x[F ∘ ϑ_t × G] = E_x[E_{X(t)}[F] G]  (3.57)

for all bounded stochastic variables F and for all bounded F_t-measurable functions G. By an application of the monotone class theorem twice (see Proposition 3.28) it suffices to take F of the form F = ∏_{j=1}^m f_j(X(s_j)), 0 < s_1 < s_2 < ⋯ < s_m < ∞, and G of the form G = ∏_{j=1}^n g_j(X(t_j)), 0 < t_1 < t_2 < ⋯ < t_n ≤ t. Here f_1, ..., f_m and g_1, ..., g_n are bounded continuous functions from ℝ^d to ℝ or ℂ. The monotone class theorem is applied once to the vector space

{G ∈ L^∞(Ω, F_t) : E_x[E_{X(t)}[F] × G] = E_x[F ∘ ϑ_t × G]},

where F is as above, and once to the vector space

{F ∈ L^∞(Ω, F) : E_{X(t)}[F] = E_x[F ∘ ϑ_t | F_t], P_x-almost surely}.

Then (3.57) may be rewritten as

E_x[f_1(X(s_1 + t)) ⋯ f_m(X(s_m + t)) g_1(X(t_1)) ⋯ g_n(X(t_n))]
= E_x[E_{X(t)}[f_1(X(s_1)) ⋯ f_m(X(s_m))] g_1(X(t_1)) ⋯ g_n(X(t_n))].  (3.58)

Put τ_j = t_j, 1 ≤ j ≤ n, τ_{n+k} = s_k + t, 1 ≤ k ≤ m; h_j = g_j, 1 ≤ j ≤ n, h_{n+k} = f_k, 1 ≤ k ≤ m. By definition we have

E_x[f_1(X(s_1 + t)) ⋯ f_m(X(s_m + t)) g_1(X(t_1)) ⋯ g_n(X(t_n))]
= E_x[h_1(X(τ_1)) ⋯ h_{n+m}(X(τ_{n+m}))]
= ∫ ⋯ ∫ h_1(x_1) ⋯ h_{n+m}(x_{n+m})
  p(τ_1, x, x_1) ⋯ p(τ_{n+m} − τ_{n+m−1}, x_{n+m−1}, x_{n+m}) dx_1 ⋯ dx_{n+m}.  (3.59)

Next we rewrite the right-hand side of (3.58):

E_x[E_{X(t)}[f_1(X(s_1)) ⋯ f_m(X(s_m))] g_1(X(t_1)) ⋯ g_n(X(t_n))]
= E_x[g_1(X(t_1)) ⋯ g_n(X(t_n)) ∫ ⋯ ∫ dy_1 ⋯ dy_m f_1(y_1) ⋯ f_m(y_m)
  p(s_1, X(t), y_1) ⋯ p(s_m − s_{m−1}, y_{m−1}, y_m)]
= ∫ ⋯ ∫ dz_1 ⋯ dz_n g_1(z_1) ⋯ g_n(z_n)
  p(t_1, x, z_1) ⋯ p(t_n − t_{n−1}, z_{n−1}, z_n) ∫ dz p(t − t_n, z_n, z)
  ∫ ⋯ ∫ dy_1 ⋯ dy_m f_1(y_1) ⋯ f_m(y_m) p(s_1, z, y_1) ⋯ p(s_m − s_{m−1}, y_{m−1}, y_m)

(Chapman–Kolmogorov: ∫ p(t − t_n, z_n, z) p(s_1, z, y_1) dz = p(s_1 + t − t_n, z_n, y_1))

= ∫ ⋯ ∫ dz_1 ⋯ dz_n dy_1 ⋯ dy_m g_1(z_1) ⋯ g_n(z_n) f_1(y_1) ⋯ f_m(y_m)
  p(t_1, x, z_1) ⋯ p(t_n − t_{n−1}, z_{n−1}, z_n)
  p(s_1 + t − t_n, z_n, y_1) ⋯ p(s_m − s_{m−1}, y_{m−1}, y_m)
= E_x[f_1(X(s_1 + t)) ⋯ f_m(X(s_m + t)) g_1(X(t_1)) ⋯ g_n(X(t_n))].  (3.60)

Since the expressions in (3.59) and (3.60) are the same, this proves the Markov property of Brownian motion. The proof of Theorem 3.32 is now complete. □
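The Chapman–Kolmogorov step used in the proof, ∫ p(t − t_n, z_n, z) p(s_1, z, y) dz = p(s_1 + t − t_n, z_n, y) with the Gaussian transition density p(t, x, y) = (2πt)^{−1/2} exp(−(y − x)²/(2t)), can be verified numerically. The sketch below is our illustration; the time increments and points are arbitrary, and the integral is a plain Riemann sum on a grid wide enough that the Gaussian tails are negligible.

```python
import numpy as np

def p(t, x, y):
    """One-dimensional Gaussian transition density of Brownian motion."""
    return np.exp(-(y - x) ** 2 / (2 * t)) / np.sqrt(2 * np.pi * t)

t1, t2 = 0.7, 0.4          # the increments t - t_n and s_1 from the text
x0, y = 0.3, -0.5          # start and end points
z = np.linspace(-10, 10, 4001)
h = z[1] - z[0]

lhs = np.sum(p(t1, x0, z) * p(t2, z, y)) * h   # Riemann sum of the convolution
rhs = p(t1 + t2, x0, y)                        # the combined-time density
assert abs(lhs - rhs) < 1e-8
```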




3. Some results on Markov processes, on Feller semigroups and on the martingale problem

Let E be a second countable locally compact Hausdorff space, let E^Δ be its one-point compactification or, if E is compact, let Δ be an isolated point of E^Δ = E ∪ {Δ}. Define the path space Ω as follows. The path space Ω is a subset of (E^Δ)^{[0,∞)} with the following properties:

(i) If ω belongs to Ω, if t ≥ 0 is such that ω(t) = Δ and if s ≥ t, then ω(s) = Δ;

(ii) Put ζ(ω) = inf {s > 0 : ω(s) = Δ} for ω ∈ Ω. If ω belongs to Ω, then ω possesses left limits in E^Δ on the interval [0, ζ] and it is right-continuous on [0, ∞);

(iii) If ω belongs to Ω, if t ≥ 0 is such that ω(t) belongs to E, then the closure of the set {ω(s) : 0 ≤ s ≤ t} is a compact subset of E or, equivalently, if t > 0 is such that ω(t−) = Δ and if s ≥ t, then ω(s) = Δ.

3.33. Definition. The random variable ζ, defined in (ii), is called the life time of ω. A path ω ∈ Ω is said to be cadlag on its life time. We also define the state variables X(t) : Ω → E^Δ by X(t)(ω) = X(t, ω) = ω(t), t ≥ 0, ω ∈ Ω. The translation or shift operators are defined in the following way: [ϑ_t(ω)](s) = ω(s + t), s, t ≥ 0 and ω ∈ Ω. The largest subset of (E^Δ)^{[0,∞)} with the properties (i), (ii) and (iii) is sometimes written as D([0, ∞), E^Δ) or as D_{E^Δ}([0, ∞)). Let F be a σ-field on Ω. A function Y : Ω → ℂ is called a random variable if it is measurable with respect to F. Of course ℂ is supplied with its Borel field. The so-called state space E is also equipped with its Borel field ℰ, and E^Δ is also equipped with its Borel field ℰ^Δ. The path ω_Δ is given by ω_Δ(s) = Δ, s ≥ 0. Unless specified otherwise we write Ω = D([0, ∞), E^Δ). The space D([0, ∞), E^Δ) is also called Skorohod space. In addition let F be a σ-field on Ω and let {F_t : t ≥ 0} be a filtration on Ω. Suppose F_t ⊆ F, t ≥ 0, and suppose that every state variable X(t), t ≥ 0, is measurable with respect to F_t. (This is the case where e.g. F_t is the σ-field generated by {X(s) : s ≤ t}.)




We also want to make a digression into operator theory. Let L be a linear operator with domain D(L) and range R(L) contained in C₀(E). The operator L is said to be closable if the closure of its graph is again the graph of an operator. Here the graph of L, G(L), is defined by G(L) = {(f, Lf) : f ∈ D(L)}. Its closure is the closure of G(L) in the cartesian product C₀(E) × C₀(E). If the closure of G(L) is the graph of an operator, then this operator is, by definition, the closure of L. It is written as L̄. Sometimes L̄ is called the smallest closed extension of L.


3.34. Definition. Let L be a linear operator with domain and range in C₀(E).

(i) The operator L is said to be dissipative if, for all λ > 0 and for all f ∈ D(L),

‖λf − Lf‖ ≥ λ‖f‖.  (3.61)

(ii) The operator L is said to verify the maximum principle if for every f ∈ D(L) with sup {Re f(x) : x ∈ E} strictly positive, there exists x₀ ∈ E with the property that

Re f(x₀) = sup {Re f(x) : x ∈ E} and Re Lf(x₀) ≤ 0.

(iii) The martingale problem is said to be uniquely solvable, or well-posed, for the operator L, if for every x ∈ E there exists a unique probability P = P_x which satisfies:

(a) For every f ∈ D(L) the process

f(X(t)) − f(X(0)) − ∫_0^t Lf(X(s)) ds, t ≥ 0,

is a P-martingale;

(b) P(X(0) = x) = 1.

(iv) The operator L is said to solve the martingale problem maximally if for L the martingale problem is uniquely solvable and if its closure L̄ is maximal for this property. This means that, if L₁ is any linear operator with domain and range in C₀(E), which extends L and for which the martingale problem is uniquely solvable, then L̄ coincides with the closure of L₁.

(v) The operator L is said to be the (infinitesimal) generator of a Feller semigroup {P(t) : t ≥ 0} if

L = s-lim_{t↓0} (P(t) − I)/t.

This means that a function f belongs to D(L) whenever Lf := lim_{t↓0} (P(t)f − f)/t exists in C₀(E).


An operator which verifies the maximum principle is dissipative (see e.g. [141], p. 14) and can be considered as a kind of generalized second order derivative operator. A prototype of such an operator is the Laplace operator. An operator for which the martingale problem is uniquely solvable is closable. This follows from (3.125) below. Our main result says that linear operators in C₀(E) which maximally solve the martingale problem are generators of Feller semigroups, and conversely.
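The dissipativity inequality (3.61) can be illustrated with a discrete analogue of the Laplace operator (our example, not from the text): the second-difference operator on a finite grid with Dirichlet boundary conditions satisfies the maximum principle, hence ‖λf − Lf‖_∞ ≥ λ‖f‖_∞ for every grid function f and every λ > 0.

```python
import numpy as np

n = 50
# Discrete Laplacian on an n-point grid, Dirichlet boundary (values 0 outside)
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

rng = np.random.default_rng(7)
for lam in (0.1, 1.0, 10.0):
    for _ in range(100):
        f = rng.standard_normal(n)
        lhs = np.max(np.abs(lam * f - L @ f))   # ||lambda f - L f||_inf
        rhs = lam * np.max(np.abs(f))           # lambda ||f||_inf
        assert lhs >= rhs - 1e-12
```

The argument behind the assertion is exactly the one in the text: at a point where |f| attains its maximum, the second difference has the opposite sign, so λf − Lf is at least λ‖f‖_∞ in absolute value there.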




3.35. Definition. Next suppose that, for every x ∈ E, a probability measure P_x on F is given. Suppose that for every bounded random variable Y : Ω → ℝ the equality E_x(Y ∘ ϑ_t | F_t) = E_{X(t)}(Y) holds P_x-almost surely for all x ∈ E and for all t ≥ 0. Then the process

{(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (E, ℰ)}

is called a Markov process. If the fixed time t may be replaced with a stopping time T, the process {(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (E, ℰ)} is called a strong Markov process. By definition P_Δ(A) = 1_A(ω_Δ) = δ_{ω_Δ}(A). Here A belongs to F. If the process {(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (E, ℰ)} is a Markov process, then we write

P(t, x, B) = P_x(X(t) ∈ B), t ≥ 0, B ∈ ℰ, x ∈ E,  (3.62)

for the corresponding transition function. The operator family {P(t) : t ≥ 0} is defined by [P(t)f](x) = E_x(f(X(t))), f ∈ C₀(E).

A relevant book on Markov processes is Ethier and Kurtz [54]. An elementary theory of diffusions is given in Durrett [45]. In this respect the books of Stroock and Varadhan [133], Stroock [132], [131], and Ikeda and Watanabe [61] are of interest as well.

We shall mainly be interested in the case where the function P(t)f is a member of C₀(E) whenever f is. In the following theorem F is the σ-field generated by {X(s) : s ≥ 0} and F_t is the σ-field generated by the past or full history, i.e. F_t = σ{X(s) : 0 ≤ s ≤ t}. If T is a stopping time we write F_{T+} = ∩_{t≥0} {A ∈ F : A ∩ {T < t} ∈ F_{t+}}.

3.36. Theorem. Let (Ω, F, P_x), x ∈ E, be probability spaces with the following properties:

(a₁) For every α > 0, for every t ≥ 0 and for every open subset U of E, the set {x ∈ E : P_x(X(t) ∈ U) > α} is open;

(a₂) For every α > 0, for every t ≥ 0 and for every compact subset K of E, the set {x ∈ E : P_x(X(t) ∈ K) ≥ α} is compact;

(c) For every open subset U of E and for every x ∈ U, the equality lim_{t↓0} P_x(X(t) ∈ U) = 1 is valid.

The following assertions are equivalent:

(i) For all t ≥ 0 and for all bounded random variables Y : Ω → ℂ the equality

E_x(Y ∘ ϑ_t | F_t) = E_{X(t)}(Y)  (3.63)

holds P_x-almost surely for all x ∈ E;

(ii) For all finite tuples 0 < t₁ < t₂ < ⋯ < t_n < ∞ together with Borel subsets B₁, ..., B_n of E the equality

P_x(X(t₁) ∈ B₁, ..., X(t_n) ∈ B_n)
= ∫_{B₁} ⋯ ∫_{B_{n−1}} ∫_{B_n} P(t_n − t_{n−1}, x_{n−1}, dx_n) P(t_{n−1} − t_{n−2}, x_{n−2}, dx_{n−1})
  ⋯ P(t₂ − t₁, x₁, dx₂) P(t₁, x, dx₁)  (3.64)

is valid for all x ∈ E (here P_x(X(t) ∈ B) = P(t, x, B));

(iii) For every (F_{t+})-stopping time T and for every bounded random variable Y : Ω → ℂ the equality

E_x(Y ∘ ϑ_T | F_{T+}) = E_{X(T)}(Y)  (3.65)

holds P_x-almost surely on {T < ∞} for all x ∈ E;

(iv) Let B be the Borel field of [0, ∞). For every bounded function F : [0, ∞) × Ω → ℂ, which is measurable with respect to B ⊗ F, and for every (F_{t+})-stopping time T the equality

E_x({ω ↦ F(T(ω), ϑ_{T(ω)}(ω))} | F_{T+}) = {ω′ ↦ E_{X(T(ω′))}({ω ↦ F(T(ω′), ω)})}  (3.66)

holds P_x-almost surely on {T < ∞} for all x ∈ E.

Equality (3.66) is called the strong time-dependent Markov property. We shall not prove this result.
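Formula (3.64) becomes very concrete for a finite-state Markov chain (our illustration, not in the text): the kernels P(t, x, dy) are the matrices P(t) = e^{tQ} for a generator Q, and the iterated integral turns into a sum of products of matrix entries. The sketch below evaluates the two-time case n = 2 and also checks the Chapman–Kolmogorov identity the proof rests on; the matrix Q is an arbitrary example of ours.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential via a plain Taylor series (adequate for small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Generator of a 3-state Markov chain (rows sum to 0); values are arbitrary
Q = np.array([[-1.0, 0.7, 0.3],
              [0.5, -0.9, 0.4],
              [0.2, 0.6, -0.8]])
t1, t2 = 0.4, 1.1
P1, P21 = expm(t1 * Q), expm((t2 - t1) * Q)

x, B1, B2 = 0, [0, 2], [1]
# (3.64) for n = 2: sum over x1 in B1, x2 in B2 of P(t2 - t1, x1, x2) P(t1, x, x1)
prob = sum(P1[x, x1] * P21[x1, x2] for x1 in B1 for x2 in B2)

assert 0.0 <= prob <= 1.0
assert np.allclose(P1 @ P21, expm(t2 * Q))       # Chapman–Kolmogorov
assert np.allclose((P1 @ P21).sum(axis=1), 1.0)  # conservativeness
```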




3.37. Theorem. Let {P(t) : t ≥ 0} be a Feller semigroup. There exists a collection of probabilities (P_x)_{x∈E} on the σ-field F generated by the state variables {X(t) : t ≥ 0} defined on Ω := D([0, ∞), E^Δ) in such a way that

E_x[f(X(t₁), ..., X(t_n))] = ∫ f(X(t₁), ..., X(t_n)) dP_x
= ∫ ⋯ ∫ f(x₁, ..., x_n) P(t_n − t_{n−1}, x_{n−1}, dx_n) P(t_{n−1} − t_{n−2}, x_{n−2}, dx_{n−1})
  ⋯ P(t₂ − t₁, x₁, dx₂) P(t₁, x, dx₁),  (3.67)

where f is any bounded complex or non-negative Borel measurable function defined on E^Δ × ⋯ × E^Δ that vanishes outside of E × ⋯ × E. Let the measure spaces (Ω, F, P_x)_{x∈E} be as in (3.67). The process

{(Ω, F, P_x), (X(t) : t ≥ 0), (ϑ_t : t ≥ 0), (E, ℰ)}

is a strong Markov process.

The proof of this result is quite technical. The first part follows from a well-known theorem of Kolmogorov on projective systems of measures: see Theorems 1.14, 3.1, 5.81. In the second part we must show that the indicated path space has full measure, so that no information is lost. Proofs are omitted. They can be found in, for example, Blumenthal and Getoor [20], Theorem 9.4, p. 46. For a discussion in the context of Polish spaces see, e.g., Sharpe [120] or Van Casteren [146]. For the convenience of the reader we include an outline of the proof of Theorem 3.37. The following lemma is needed in the proof.

3.38. Lemma. Let (Ω, B, P) be a probability space, and let t ↦ Y(t) be a supermartingale which attains positive values. Fix t > 0 and let D be a dense countable subset of [0, ∞). Then

P[Y(t) > 0, inf_{0<s<t, s∈D} Y(s) = 0] = 0.  (3.68)

Proof. Let (s_j)_{j∈ℕ} be an enumeration of the set D ∩ [0, ∞). Fix n, N ∈ ℕ, and define the stopping time S_{n,N} by

S_{n,N} = min {s_j : 1 ≤ j ≤ N, Y(s_j) ≤ 2^{−n}}  (= ∞ if no such s_j exists).

Then we have Y(S_{n,N}) ≤ 2^{−n} on the event {S_{n,N} < ∞}. In addition, by the supermartingale property (for discrete stopping times) we infer

E[Y(t), min_{1≤j≤N, s_j<t} Y(s_j) ≤ 2^{−n}]
= E[Y(t), S_{n,N} < t]
= E[Y(t), min(S_{n,N}, t) < t]
≤ E[Y(min(S_{n,N}, t)), min(S_{n,N}, t) < t]
= E[Y(S_{n,N}), S_{n,N} < t]
≤ E[2^{−n}, S_{n,N} < t] ≤ 2^{−n}.  (3.69)

In (3.69) we let N → ∞ to obtain:

E[Y(t), inf_{0<s<t, s∈D} Y(s) ≤ 2^{−n}] ≤ 2^{−n}.  (3.70)

In (3.70) we let n → ∞ to get:

E[Y(t), inf_{0<s<t, s∈D} Y(s) = 0] = 0.  (3.71)

Let a > 0 be arbitrary. From (3.71) it follows that

P[Y(t) ≥ a, inf_{0<s<t, s∈D} Y(s) = 0] ≤ (1/a) E[Y(t), inf_{0<s<t, s∈D} Y(s) = 0] = 0.  (3.72)

Then in (3.72) we let a ↓ 0 to complete the proof of Lemma 3.38. □
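Lemma 3.38 can be illustrated with about the simplest non-negative martingale available (our example, not from the text): a simple random walk started at 5 and absorbed at 0. On every simulated path whose running infimum reaches 0 before the terminal time, the terminal value is 0 as well, exactly as (3.68) predicts.

```python
import numpy as np

rng = np.random.default_rng(11)
n_paths, n_steps, start = 20000, 100, 5

# Simple random walk started at `start`, absorbed at 0: a non-negative martingale
Y = np.full((n_paths, n_steps + 1), start, dtype=float)
for k in range(1, n_steps + 1):
    step = rng.choice([-1.0, 1.0], size=n_paths)
    alive = Y[:, k - 1] > 0
    Y[:, k] = np.where(alive, Y[:, k - 1] + step, 0.0)

hit_zero = (Y[:, :-1] == 0).any(axis=1)   # the infimum over earlier times is 0
# (3.68): on {inf_{s<t} Y(s) = 0}, the terminal value Y(t) vanishes almost surely
assert np.all(Y[hit_zero, -1] == 0)
```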


Let {P(t) : t ≥ 0} be a Feller–Dynkin semigroup acting on C₀(E), where E is a locally compact Hausdorff space. In the proof of Theorem 3.37 we will also use the resolvent operators {R(α) : α > 0}: R(α)f(x) = ∫_0^∞ e^{−αt} P(t)f(x) dt, α > 0, f ∈ C₀(E). An important property is the resolvent equation:

R(β) − R(α) = (α − β) R(α)R(β), α, β > 0.

The latter property is a consequence of the semigroup property.
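For a finite-state chain with generator Q the resolvent is R(α) = (αI − Q)^{−1} = ∫₀^∞ e^{−αt} e^{tQ} dt, and the resolvent equation reduces to plain linear algebra. The sketch below (our example; the matrix Q and the values of α, β are arbitrary) checks it directly.

```python
import numpy as np

Q = np.array([[-1.0, 0.7, 0.3],
              [0.5, -0.9, 0.4],
              [0.2, 0.6, -0.8]])
I = np.eye(3)

def R(alpha):
    """Resolvent R(alpha) = (alpha I - Q)^{-1} of the semigroup P(t) = exp(tQ)."""
    return np.linalg.inv(alpha * I - Q)

a, b = 0.7, 2.5
# Resolvent equation: R(b) - R(a) = (a - b) R(a) R(b)
assert np.allclose(R(b) - R(a), (a - b) * R(a) @ R(b))
# In particular the resolvent operators commute
assert np.allclose(R(a) @ R(b), R(b) @ R(a))
```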

3.39. Remark. The space E is supposed to be a second countable (i.e. a topological space with a countable base for its topology) locally compact Hausdorff space; in particular it is a Polish space. A second countable locally compact Hausdorff space is Polish. Let (U_i)_i be a countable basis of open subsets with compact closures, choose for each i ∈ ℕ a point y_i ∈ U_i, together with a continuous function f_i : E → [0, 1] such that f_i(y_i) = 1 and such that f_i(y) = 0 for y ∉ U_i. Since a locally compact Hausdorff space is completely regular this choice is possible. Put

d(x, y) = ∑_{i=1}^∞ 2^{−i} |f_i(x) − f_i(y)|, x, y ∈ E.

This metric gives the same topology, and it is not too difficult to verify its completeness. For this notice that the sequence (f_i)_i separates the points of E, and therefore the algebraic span (i.e. the linear span of the finite products of the functions f_i) is dense in C₀(E) for the topology of uniform convergence. A proof of the fact that a locally compact space is completely regular can be found in Willard [152], Theorem 19.3. The connection with Urysohn's metrization theorem is also explained there. A related construction can be found in Garrett [57]; see Dixmier [39], Appendix V as well.

3.40. Remark. Next we present the notion of Skorohod space. Let D([0, 1], ℝ) be the space of real-valued functions ω defined on the interval [0, 1] that are right-continuous and have left-hand limits, i.e., ω(t) = ω(t+) = lim_{s↓t} ω(s) for all 0 ≤ t < 1, and ω(t−) = lim_{s↑t} ω(s) exists for all 0 < t ≤ 1. (In the probabilistic literature such a function is also said to be a cadlag function, "cadlag" being an acronym for the French "continu à droite, limites à gauche".) The supremum norm on D([0, 1], ℝ), given by

‖ω‖_∞ = sup_{t∈[0,1]} |ω(t)|, ω ∈ D([0, 1], ℝ),

turns the space D([0, 1], ℝ) into a Banach space which is non-separable. This non-separability causes well-known problems of measurability in the theory of weak convergence of measures on the space. To overcome this inconvenience, A.V. Skorohod introduced a metric (and topology) under which the space becomes a separable metric space. Although the original metric introduced by Skorohod has a drawback in the sense that the metric space obtained is not complete, it turned out (see Kolmogorov [70]) that it is possible to construct an equivalent metric (i.e., giving the same topology) under which the space D([0, 1], ℝ) becomes a separable and complete metric space. For such a metric space the term Polish space is often used. This metric is defined as follows, and is taken from Paulauskas [110]. Let Λ denote the class of strictly increasing continuous mappings of [0, 1] onto itself. For λ ∈ Λ, let

‖λ‖ = sup_{0≤s<t≤1} |log ((λ(t) − λ(s))/(t − s))|.

Then for ω₁ and ω₂ ∈ D([0, 1], ℝ) we define

d(ω₁, ω₂) = inf_{λ∈Λ} max (‖λ‖, ‖ω₁ − ω₂ ∘ λ‖_∞).

The topology generated by this metric is called the Skorohod topology, and the complete separable metric space D([0, 1], ℝ) is called the Skorohod space. This space is very important in the theory of stochastic processes. The general theory of weak convergence of probability measures on metric spaces and, in particular, on the space D([0, 1], ℝ) is well developed. This theory was started in fundamental papers like Chentsov [33], Kolmogorov [70], Prohorov [112], Skorohod [122]. A well-known reference on these topics is Billingsley [17]. Generalizations of the Skorohod space are worth mentioning. Instead of real-valued functions on [0, 1] it is possible to consider functions defined on [0, ∞) and taking values in a metric space E. The space of cadlag functions obtained in this way is denoted by D([0, ∞), E), and if E is a Polish space, then D([0, ∞), E), with the appropriate topology, is also a Polish space; see Ethier and Kurtz [54] and Pollard [111], where these spaces are treated systematically.
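A small numeric illustration of the metric just defined (our own, not part of the text): two indicator paths ω₁ = 1_{[a,1]} and ω₂ = 1_{[b,1]} remain at sup-norm distance 1 however close a and b are, but a piecewise-linear time change λ with λ(a) = b makes ω₂ ∘ λ coincide with ω₁, so d(ω₁, ω₂) ≤ ‖λ‖, which is small when a ≈ b. This is exactly why the Skorohod topology, unlike the uniform one, identifies paths whose jumps occur at nearly matching times.

```python
import numpy as np

a, b = 0.50, 0.52
w1 = lambda t: float(t >= a)
w2 = lambda t: float(t >= b)

# Piecewise-linear time change: lambda(0) = 0, lambda(a) = b, lambda(1) = 1
slopes = (b / a, (1 - b) / (1 - a))
lam = lambda t: (b / a) * t if t <= a else b + (1 - b) / (1 - a) * (t - a)

ts = np.linspace(0, 1, 1001)
# omega2(lambda(t)) coincides with omega1(t): the jump times are matched
assert all(w2(lam(t)) == w1(t) for t in ts)

# ||lambda|| = sup |log slope| over the two linear pieces
norm_lam = max(abs(np.log(s)) for s in slopes)
assert norm_lam < 0.05            # hence d(w1, w2) <= 0.05 ...
assert max(abs(w1(t) - w2(t)) for t in ts) == 1.0   # ... while ||w1 - w2||_inf = 1
```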


Outline of a proof of Theorem 3.37. Firstly, the Riesz representation theorem, applied to the functionals f ↦ P(t)f(x), f ∈ C₀(E), (t, x) ∈ [0, ∞) × E, provides a family of sub-probability measures B ↦ P(t, x, B), B ∈ ℰ, (t, x) ∈ [0, ∞) × E, with P(0, x, B) = δ_x(B) = 1_B(x). From the semigroup property, i.e. P(s + t) = P(s)P(t), s, t ≥ 0, it follows that the family {P(t, x, ·) : (t, x) ∈ [0, ∞) × E} obeys the Chapman–Kolmogorov identity:

P(s + t, x, B) = ∫ P(t, y, B) P(s, x, dy), B ∈ ℰ, s ≥ 0, t ≥ 0, x ∈ E.  (3.73)

The measures B ↦ P(t, x, B), B ∈ ℰ, are inner and outer regular in the sense that, for all Borel subsets B (i.e. B ∈ ℰ),

P(t, x, B) = sup {P(t, x, K) : K ⊆ B, K compact}
           = inf {P(t, x, O) : O ⊇ B, O open}.  (3.74)

In general we have 0 ≤ P(t, x, B) ≤ 1, (t, x, B) ∈ [0, ∞) × E × ℰ. In order to apply Kolmogorov's extension theorem we need that, for every x ∈ E, the function t ↦ P(t, x, E) is constant. Since P(0, x, E) = 1 this constant must be 1. This can be achieved by adding an absorption point Δ to E. So instead of E we consider the state space E^Δ = E ∪ {Δ}, which, topologically speaking, can be considered as the one-point compactification of E, if E is not compact. If E is compact, Δ is an isolated point of E^Δ. Let ℰ^Δ be the Borel field of E^Δ. Then the new family of probability measures {N(t, x, ·) : (t, x) ∈ [0, ∞) × E^Δ} is defined as follows:

N(t, x, B) = P(t, x, B ∩ E) + (1 − P(t, x, E)) 1_B(Δ), (t, x) ∈ [0, ∞) × E,
N(t, Δ, B) = 1_B(Δ), t ≥ 0, B ∈ ℰ^Δ.  (3.75)
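In discrete form (a sketch of ours, not from the text): given a substochastic matrix P, i.e. rows summing to at most 1, formula (3.75) builds a genuinely stochastic matrix N on E ∪ {Δ} by routing the missing mass to an absorbing state Δ.

```python
import numpy as np

# A substochastic kernel on a 3-point state space: rows may sum to less than 1
P = np.array([[0.5, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.2]])

def add_cemetery(P):
    """Implement (3.75): route the defect 1 - sum_y P(x, y) to an absorbing point Delta."""
    n = len(P)
    N = np.zeros((n + 1, n + 1))
    N[:n, :n] = P
    N[:n, n] = 1.0 - P.sum(axis=1)   # mass sent to Delta
    N[n, n] = 1.0                    # Delta is absorbing
    return N

N = add_cemetery(P)
assert np.allclose(N.sum(axis=1), 1.0)        # N is conservative
assert N[-1, -1] == 1.0                       # once in Delta, stay in Delta
assert np.allclose((N @ N)[:3, :3], P @ P)    # the restriction to E still matches P^2
```

The last assertion reflects the remark below (3.75): on E × ℰ the extended kernel N agrees with P, also after composition, because Δ never feeds mass back into E.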





Compare this construction with the one in (1.1). The family

N := {N(t, x, ·) : (t, x) ∈ [0, ∞) × E^Δ}

again satisfies the Chapman–Kolmogorov identity, with state space E^Δ instead of E. Notice that N(t, x, B) = P(t, x, B) whenever (t, x, B) belongs to [0, ∞) × E × ℰ. Employing the family N we define a family of probability spaces as follows. For every x₀ ∈ E^Δ, and every increasing n-tuple 0 ≤ t₁ < ⋯ < t_n < ∞ in [0, ∞)^n, we consider the probability space ((E^Δ)^n, ⊗^n ℰ^Δ, P_{x₀,t₁,...,t_n}); the probability measure P_{x₀,t₁,...,t_n} is defined by

P_{x₀,t₁,...,t_n}(B) = ∫ ⋯ ∫_B N(t₁ − t₀, x₀, dx₁) ⋯ N(t_n − t_{n−1}, x_{n−1}, dx_n),  (3.76)

where t₀ = 0 and B ∈ ⊗^n ℰ^Δ. By an appeal to the Chapman–Kolmogorov identity it follows that, for x₀ ∈ E^Δ fixed, the family of probability spaces

{((E^Δ)^n, ⊗^n ℰ^Δ, P_{x₀,t₁,...,t_n}) : 0 ≤ t₁ < ⋯ < t_n < ∞, n ∈ ℕ}  (3.77)

is a projective system of probability spaces. Put Ω^Δ = (E^Δ)^{[0,∞)} and equip this space with the product σ-field F^Δ := ⊗^{[0,∞)} ℰ^Δ. In addition, write X(t)(ω) = ω(t), ϑ_t ω(s) = ω(s + t), s, t ≥ 0, ω ∈ Ω^Δ. The variables X(t), t ≥ 0, are called the state variables, and the mappings ϑ_t, t ≥ 0, are called the (time) translation or shift operators. By Kolmogorov's extension theorem there exists, for every x ∈ E^Δ, a probability measure P_x on the σ-field F^Δ such that

P_x[(X(t₁), ..., X(t_n)) ∈ B] = P_{x,t₁,...,t_n}[B].  (3.78)

In (3.78) B belongs to ⊗^n ℰ^Δ, and 0 ≤ t₁ < ⋯ < t_n < ∞ is an arbitrary increasing n-tuple in [0, ∞). Another appeal to the Chapman–Kolmogorov identity and the monotone class theorem shows that the quadruple

{(Ω^Δ, F^Δ, P_x), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (E^Δ, ℰ^Δ)}  (3.79)

is a Markov process relative to the internal history, i.e. relative to the filtration F_t^Δ = σ{X(s) : 0 ≤ s ≤ t}, t ≥ 0. Moreover, we have P_x[X(0) = x] = 1, and, by the Markov property, we also have, for x ∈ E^Δ, t > s ≥ 0,

P_x[X(t) = Δ, X(s) = Δ]
= E_x[P_x[X(t − s) ∘ ϑ_s = Δ | F_s^Δ], X(s) = Δ]

(Markov property)

= E_x[P_{X(s)}[X(t − s) = Δ], X(s) = Δ]
= E_x[P_Δ[X(t − s) = Δ], X(s) = Δ]
= N(t − s, Δ, {Δ}) · N(s, x, {Δ}) = N(s, x, {Δ}) = P_x[X(s) = Δ].  (3.80)

The equality in (3.80) says that once the process t ↦ X(t) enters Δ it stays there. In other words, Δ is an absorption point for the process X.


Define, for t ≥ 0, the mapping P(t) : C(E^Δ) → C(E^Δ) by P(t)f(x) = E_x[f(X(t))], f ∈ C(E^Δ). From the Markov property of the process X, and since the semigroup t ↦ P(t) is a Feller–Dynkin semigroup, it follows that the mappings P(t), t ≥ 0, constitute a Feller (or Feller–Dynkin) semigroup on C(E^Δ). Consequently, for any f ∈ C(E^Δ) and any t₀ ≥ 0, we have

lim_{s,t→t₀, s,t≥0} sup_{x∈E^Δ} E_x[|f(X(t)) − f(X(s))|] = 0.  (3.81)

Let D be the collection of non-negative dyadic rational numbers. Since the space E^Δ is compact metrizable, it follows from (3.81) that, for all x ∈ E^Δ, the following limits

lim_{s↑t, s∈D} X(s) and lim_{s↓t, s∈D} X(s)  (3.82)

exist in E^Δ, P_x-almost surely. Define the mapping π : Ω^Δ → Ω by

π(ω)(t) = lim_{s↓t, s∈D} X(s)(ω) =: X(t)(π(ω)), t ≥ 0, ω ∈ Ω^Δ.  (3.83)

Then we have that, for every x ∈ E^Δ fixed, the processes t ↦ X(t) ∘ π and t ↦ X(t) are P_x-indistinguishable in the sense that there exists an event Ω^{Δ,x} ⊆ Ω^Δ such that P_x[Ω^{Δ,x}] = 1 and such that for all t ∈ [0, ∞) the equality X(t) = X(t) ∘ π holds on the event Ω^{Δ,x}. This assertion is a consequence of the following argument. For every (t, x) ∈ [0, ∞) × E^Δ and for every f ∈ C(E^Δ) we see

E_x[|f(X(t) ∘ π) − f(X(t))|] = lim_{s↓t, s∈D} E_x[|f(X(s)) − f(X(t))|] = 0.  (3.84)

Since the space E^Δ is second countable, the space C(E^Δ) is separable, and so the equalities in (3.82) and (3.84) imply that, up to an event which is P_x-negligible, X(t) = X(t) ∘ π for all t ≥ 0. See Definition 5.88 as well. In addition, we have that, for ω ∈ Ω^Δ, the realization t ↦ π(ω)(t) belongs to the Skorohod space D, i.e. it is continuous from the right and possesses left limits. We still need to show that X(t) ∈ E implies that the closure of the orbit {X(s) : 0 ≤ s ≤ t, s ∈ D} is a closed, and so compact, subset of E. For this purpose we choose a strictly positive function f ∈ C₀(E) which we extend to a function, again called f, such that f(Δ) = 0. It is convenient to employ the resolvent operators R(α), α > 0, here. We will prove that, for α > 0 fixed, the process t ↦ e^{−αt} R(α)f(X(t)) is a P_x-supermartingale relative to the filtration (F_t^Δ)_{t≥0}. Therefore, let t₂ > t₁ ≥ 0. Then we write:

E_x[e^{−αt₂} R(α)f(X(t₂)) | F_{t₁}^Δ]
= E_x[e^{−αt₂} ∫_0^∞ e^{−αs} E_{X(t₂)}[f(X(s))] ds | F_{t₁}^Δ]

(Markov property)

= E_x[e^{−αt₂} ∫_0^∞ e^{−αs} E_x[f(X(s + t₂)) | F_{t₂}^Δ] ds | F_{t₁}^Δ]

(Fubini's theorem and tower property of conditional expectation)

= E_x[e^{−αt₂} ∫_0^∞ e^{−αs} f(X(s + t₂)) ds | F_{t₁}^Δ]
= E_x[∫_{t₂}^∞ e^{−αs} f(X(s)) ds | F_{t₁}^Δ]

(the function f is non-negative and t₂ > t₁)

≤ E_x[∫_{t₁}^∞ e^{−αs} f(X(s)) ds | F_{t₁}^Δ]
= e^{−αt₁} E_x[∫_0^∞ e^{−αs} f(X(s + t₁)) ds | F_{t₁}^Δ]

(Fubini's theorem in combination with the Markov property)

= e^{−αt₁} ∫_0^∞ e^{−αs} E_{X(t₁)}[f(X(s))] ds = e^{−αt₁} R(α)f(X(t₁)).  (3.85)
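For a finite-state chain the supermartingale property just derived can be verified exactly (our sketch; the generator, the function f and the value of α are arbitrary choices of ours): with P(t) = e^{tQ} and R(α) = (αI − Q)^{−1}, the expectation E_x[e^{−αt} R(α)f(X(t))] = e^{−αt} [P(t)R(α)f](x) should be non-increasing in t.

```python
import numpy as np

def expm(A, terms=60):
    """Matrix exponential by Taylor series (fine for small, modest-norm matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

Q = np.array([[-1.0, 0.7, 0.3],
              [0.5, -0.9, 0.4],
              [0.2, 0.6, -0.8]])
alpha = 1.5
f = np.array([1.0, 2.0, 0.5])                    # a strictly positive function on E
Rf = np.linalg.solve(alpha * np.eye(3) - Q, f)   # R(alpha) f

ts = np.linspace(0.0, 3.0, 61)
# m(t) = E_x[exp(-alpha t) R(alpha) f(X(t))] for the starting state x = 0
m = [np.exp(-alpha * t) * (expm(t * Q) @ Rf)[0] for t in ts]
assert all(m[i + 1] <= m[i] + 1e-12 for i in range(len(m) - 1))
```

The monotonicity follows from d/dt [e^{−αt} P(t) R(α)f] = −e^{−αt} P(t) f ≤ 0 componentwise, which is the matrix version of the chain of inequalities in (3.85).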





Put Y(t) = e^{−αt} R(α)f(X(t)), and fix x ∈ E. From (3.85) we see that the process t ↦ Y(t) is a P_x-supermartingale relative to the filtration (F_t^Δ)_{t≥0}. From Lemma 3.38, with this Y(t) in the role of the supermartingale there and with P_x in place of P, we infer

P_x[X(t) ∈ E, inf_{s∈D∩(0,t)} Y(s) = 0] = P_x[Y(t) > 0, inf_{s∈D∩(0,t)} Y(s) = 0] = 0.  (3.86)

From (3.86) we see that, P_x-almost surely, X(t) ∈ E implies inf_{s∈D∩(0,t)} Y(s) > 0. Consequently, for every x ∈ E, the equality

P_x[X(t) ∈ E] = P_x[X(t) ∈ E, closure {X(s) : s ∈ D ∩ (0, t)} ⊆ E]  (3.87)

holds. In other words: the closure of the orbit {X(s) : s ∈ D ∩ (0, t)} is contained in E whenever X(t) belongs to E. We are almost at the end of the proof. We still have to carry over the Markov process in (3.79) to a process of the form

{(Ω, F, P_x), (X(t), t ≥ 0), (ϑ_t, t ≥ 0), (E, ℰ)}  (3.88)

with Ω = D([0, ∞), E^Δ) the Skorohod space of paths with values in E^Δ. This can be done as follows. Define the state variables X(t) : Ω → E^Δ by X(t)(ω) = ω(t), ω ∈ Ω, and let ϑ_t : Ω → Ω be defined as above, i.e. ϑ_t(ω)(s) = ω(s + t), ω ∈ Ω. Let the mapping π : Ω^Δ → Ω be defined as in (3.83). Then, as shown above, for every x ∈ E, the processes X(t) ∘ π and X(t) are P_x-indistinguishable. The probability measures P_x, x ∈ E, on Ω are defined by P_x[A] = P_x[π ∈ A], where A is a Borel subset of Ω. Then all ingredients of (3.88) are defined. It is clear that the quadruple in (3.88) is a Markov process. Since the paths, or realizations, are right-continuous, it represents a strong Markov process.

This completes an outline of the proof of Theorem 3.37. □


As above, L is a linear operator with domain D(L) and range R(L) in C₀(E). Suppose that the domain D(L) of L is dense in C₀(E). The problem we want to address is the following. Give necessary and sufficient conditions on the operator L in order that for every x ∈ E there exists a unique probability measure P_x on F with the following properties:

(i) For every f ∈ D(L) the process f(X(t)) − f(X(0)) − ∫_0^t Lf(X(s)) ds, t ≥ 0, is a P_x-martingale;

(ii) P_x(X(0) = x) = 1.

Here we suppose Ω = D([0, ∞), E^Δ) (Skorohod space) and F is the σ-field generated by the state variables X(t), t ≥ 0. Let P(Ω) be the set of all probability measures on F and define the subset P′(Ω) of P(Ω) by

P′(Ω) = ∪_{x∈E} {P ∈ P(Ω) : P[X(0) = x] = 1 and for every f ∈ D(L) the process
f(X(t)) − f(X(0)) − ∫_0^t Lf(X(s)) ds, t ≥ 0, is a P-martingale}.  (3.89)
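For one-dimensional Brownian motion and L = ½ d²/dx², the defining process in (i) can be checked by simulation (our sketch; the discretization and tolerance are our choices). With f(x) = x² the process is f(B_t) − f(B_0) − t, whose expectation must stay at 0 under each P_x; we check this at the terminal time for x = 0.

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, n_steps, T = 50000, 200, 1.0
dt = T / n_steps

# Simulate Brownian paths started at x = 0
dB = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(dB, axis=1)], axis=1)

f = lambda x: x ** 2
Lf = lambda x: np.ones_like(x)      # (1/2) f''(x) = 1 for f(x) = x^2

# M_T = f(B_T) - f(B_0) - \int_0^T Lf(B_s) ds, with the integral as a Riemann sum
integral = dt * Lf(B[:, :-1]).sum(axis=1)
M_T = f(B[:, -1]) - f(B[:, 0]) - integral
assert abs(M_T.mean()) < 0.05       # E[M_T] = 0 for a martingale started at 0
```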

Let (vj : j e N) be a sequence of continuous functions defined on E with the 
following properties: 


(i) v 0 = 1; 

(ii) b-L ^ 1 and Vj belongs to D(L) for j ^ 1; 

(iii) The linear span of Vj , j ^ 0, is dense in C(E A ). 


In addition let (/& : k e N) be a sequence in D(L) such that the linear span 
of {( fk , Lfk) : k e N} is dense in the graph G(L ) := {(/, Lf ) : / e D(L)} of the 
operator L. Moreover let {s 3 : j e N) be an enumeration of the set Q n [0, oo). 
The subset P'(O-) may be described as follows: 


00 00 00 


p, (! 2 ) - n n n n 


n 


(3.90) 


n=lk=lm=l 0 ^ <...<s jjn+1 


P e P(fl) : inf max 

xeE l^j^n 


J (h(X(s im „)) ~ LMX{s))ds\ n” (V(sj,))dP 

=j - p LMx^d^j nr,, (vtsjj)®}. 


It follows that $P'(\Omega)$ is a weakly closed subset of $P(\Omega)$. In fact we shall prove
that, if for the operator $L$ the martingale problem is uniquely solvable, then
the set $P'(\Omega)$ is compact metrizable for the metric $d(P_1, P_2)$ given by
\[
d(P_1, P_2) = \sum_{A \subset \mathbb{N},\, |A| < \infty} 2^{-|A|} \Big| \int \prod_{j \in A} v_j(X(s_j))\,d(P_1 - P_2) \Big|. \tag{3.91}
\]

The following result should be compared to the comments in 6.7.4 of [133]. It
is noticed that in Proposition 3.41 below the uniqueness of the solutions to the
martingale problem is not used.

3.41. Proposition. The set $P'(\Omega)$ supplied with the metric $d$ defined in (3.91)
is a compact Hausdorff space.


Proof. Let $(P_n : n \in \mathbb{N})$ be any sequence in $P'(\Omega)$. Let $(P_{n_\ell} : \ell \in \mathbb{N})$ be
a subsequence with the property that for every $m \in \mathbb{N}$, for every $m$-tuple
$(j_1, \ldots, j_m)$ in $\mathbb{N}^m$ and for every $m$-tuple $(s_{j_1}, \ldots, s_{j_m}) \in \mathbb{Q}^m$ the limit
\[
\lim_{\ell \to \infty} \int \prod_{k=1}^{m} v_{j_k}\big(X(s_{j_k})\big)\,dP_{n_\ell}
\]
exists. We shall prove that for every $m \in \mathbb{N}$, for every $m$-tuple $(j_1, \ldots, j_m)$ in
$\mathbb{N}^m$ and for every $m$-tuple $(t_{j_1}, \ldots, t_{j_m}) \in [0, \infty)^m$ the limit
\[
\lim_{\ell \to \infty} \int \prod_{k=1}^{m} u_{j_k}\big(X(t_{j_k})\big)\,dP_{n_\ell} \tag{3.92}
\]
exists for all sequences $(u_j : j \in \mathbb{N})$ in $C_0(E)$. But then there exists, by Kolmogorov's extension theorem, a probability measure $P$ such that
\[
\lim_{\ell \to \infty} \int \prod_{k=1}^{m} u_{j_k}\big(X(t_{j_k})\big)\,dP_{n_\ell}
= \int \prod_{k=1}^{m} u_{j_k}\big(X(t_{j_k})\big)\,dP \tag{3.93}
\]
for all $m \in \mathbb{N}$, for all $(j_1, \ldots, j_m) \in \mathbb{N}^m$ and for all $(t_{j_1}, \ldots, t_{j_m}) \in [0, \infty)^m$. From
the description (3.89) of $P'(\Omega)$ it then readily follows that $P$ is a member of
$P'(\Omega)$. So the existence of the limit in (3.92) remains to be verified, together
with the fact that $D([0, \infty), E^{\Delta})$ has full $P$-measure. Let $t$ be in $\mathbb{Q}$. Since,
for every $j \in \mathbb{N}$, the process $v_j(X(s)) - v_j(X(0)) - \int_0^s Lv_j(X(\sigma))\,d\sigma$, $s \geq 0$, is a
martingale for the measures $P_{n_\ell}$, we infer
\[
\int \int_0^t Lv_j(X(s))\,ds\,dP_{n_\ell} = \int v_j(X(t))\,dP_{n_\ell} - \int v_j(X(0))\,dP_{n_\ell},
\]
and hence the limit $\lim_{\ell \to \infty} \int \int_0^t Lv_j(X(s))\,ds\,dP_{n_\ell}$ exists.



Next let $t_0$ be in $[0, \infty)$. Again using the martingale property we see
\[
\int v_j(X(t_0))\,d(P_{n_\ell} - P_{n_k})
= \int \Big( \int_0^t Lv_j(X(s))\,ds \Big)\,d(P_{n_\ell} - P_{n_k})
+ \int v_j(X(0))\,d(P_{n_\ell} - P_{n_k})
- \int \Big( \int_{t_0}^{t} Lv_j(X(s))\,ds \Big)\,d(P_{n_\ell} - P_{n_k}), \tag{3.94}
\]
where $t$ is any number in $\mathbb{Q} \cap [0, \infty)$. From (3.94) we infer
\[
\Big| \int v_j(X(t_0))\,d(P_{n_\ell} - P_{n_k}) \Big|
\leq \Big| \int \Big( \int_0^t Lv_j(X(s))\,ds \Big)\,d(P_{n_\ell} - P_{n_k})
+ \int v_j(X(0))\,d(P_{n_\ell} - P_{n_k}) \Big|
+ 2\,|t - t_0|\,\|Lv_j\|_\infty. \tag{3.95}
\]
If we let $\ell$ and $k$ tend to infinity, we obtain
\[
\limsup_{\ell, k \to \infty} \Big| \int v_j(X(t_0))\,d(P_{n_\ell} - P_{n_k}) \Big|
\leq 2\,|t - t_0|\,\|Lv_j\|_\infty. \tag{3.96}
\]


Consequently for every $s \geq 0$ the limit $\lim_{\ell \to \infty} \int v_j(X(s))\,dP_{n_\ell}$ exists. The
inequality
\[
\Big| \int \Big( \int_{t_0}^{t} Lv_j(X(s))\,ds \Big)\,dP_{n_\ell} \Big| \leq |t - t_0|\,\|Lv_j\|_\infty
\]
shows that the functions $t \mapsto \lim_{\ell \to \infty} \int v_j(X(t))\,dP_{n_\ell}$, $j \in \mathbb{N}$, are continuous.
Since the linear span of $(v_j : j \in \mathbb{N})$ is dense in $C_0(E)$, it follows that for
$v \in C_0(E)$ and for every $t \geq 0$ the limit
\[
\lim_{\ell \to \infty} \int v(X(t))\,dP_{n_\ell} \tag{3.97}
\]


exists and that this limit, as a function of $t$, is continuous. The following step
consists in proving that for every $t_0 \in [0, \infty)$ the equality
\[
\lim_{t \to t_0} \limsup_{\ell \to \infty} \int \big| v_j(X(t)) - v_j(X(t_0)) \big|\,dP_{n_\ell} = 0 \tag{3.98}
\]
holds. For $t > s$ the following (in-)equalities are valid:
\[
\Big( \int \big| v_j(X(t)) - v_j(X(s)) \big|\,dP_{n_\ell} \Big)^2
\leq \int \big| v_j(X(t)) - v_j(X(s)) \big|^2\,dP_{n_\ell}
\]
\[
= \int |v_j(X(t))|^2\,dP_{n_\ell} - \int |v_j(X(s))|^2\,dP_{n_\ell}
- 2\,\mathrm{Re} \int \big( v_j(X(t)) - v_j(X(s)) \big)\,\overline{v_j(X(s))}\,dP_{n_\ell}
\]




\[
= \int |v_j(X(t))|^2\,dP_{n_\ell} - \int |v_j(X(s))|^2\,dP_{n_\ell}
- 2\,\mathrm{Re} \int \Big( \int_s^t Lv_j(X(\sigma))\,d\sigma \Big)\,\overline{v_j(X(s))}\,dP_{n_\ell}
\]
\[
\leq \int |v_j(X(t))|^2\,dP_{n_\ell} - \int |v_j(X(s))|^2\,dP_{n_\ell} + 2\,(t - s)\,\|Lv_j\|_\infty. \tag{3.99}
\]
Hence (3.97) together with (3.99) implies (3.98). By (3.93), we may apply
Kolmogorov's theorem to prove that there exists a probability measure $P$ on
$\Omega' := (E^{\Delta})^{[0,\infty)}$ with the property that


\[
\int \prod_{k=1}^{m} v_{j_k}\big(X(s_{j_k})\big)\,dP
= \lim_{\ell \to \infty} \int \prod_{k=1}^{m} v_{j_k}\big(X(s_{j_k})\big)\,dP_{n_\ell} \tag{3.100}
\]
holds for all $m \in \mathbb{N}$ and for all $(s_{j_1}, \ldots, s_{j_m}) \in [0, \infty)^m$. It then also follows that
the equality in (3.100) is also valid for all $m$-tuples $f_1, \ldots, f_m$ in $C(E^{\Delta})$ instead
of $v_{j_1}, \ldots, v_{j_m}$. This is true because the linear span of the sequence $(v_j : j \in \mathbb{N})$ is
dense in $C(E^{\Delta})$. In addition we conclude that the processes $f(X(t)) - f(X(0)) -
\int_0^t Lf(X(s))\,ds$, $t \geq 0$, $f \in D(L)$, are $P$-martingales. We still have to show
that $D([0, \infty), E^{\Delta})$ has $P$-measure 1. From (3.98) it essentially follows that the
set of $\omega \in (E^{\Delta})^{[0,\infty)}$ for which the left and right hand limits exist in $E^{\Delta}$ has
``full'' $P$-measure. First let $f \geq 0$ be in $C_0(E)$. Then the process $[G_\lambda f](t) :=
\mathbb{E}\big( \int_t^{\infty} e^{-\lambda \sigma} f(X(\sigma))\,d\sigma \mid \mathcal{F}_t \big)$ is a $P$-supermartingale with respect to the filtration
$\{\mathcal{F}_t : t \geq 0\}$. It follows that the limits $\lim_{t \downarrow t_0} [G_\lambda f](t)$ and $\lim_{t \uparrow t_0} [G_\lambda f](t)$ both
exist $P$-almost surely for all $t_0 \geq 0$ and for all $f \in C_0(E)$. In particular these
limits exist $P$-almost surely for all $f \in D(L)$. By the martingale property it
follows that, for $f \in D(L)$,


\[
\big| f(X(t)) - \lambda e^{\lambda t} [G_\lambda f](t) \big|
= \Big| \lambda e^{\lambda t}\,\mathbb{E}\Big( \int_t^{\infty} e^{-\lambda \sigma} \big( f(X(\sigma)) - f(X(t)) \big)\,d\sigma \;\Big|\; \mathcal{F}_t \Big) \Big|
\]
\[
= \Big| \lambda e^{\lambda t}\,\mathbb{E}\Big( \int_t^{\infty} e^{-\lambda \sigma} \Big( \int_t^{\sigma} Lf(X(s))\,ds \Big)\,d\sigma \;\Big|\; \mathcal{F}_t \Big) \Big|
\leq \lambda e^{\lambda t} \int_t^{\infty} e^{-\lambda \sigma} (\sigma - t)\,\|Lf\|_\infty\,d\sigma
= \lambda^{-1} \|Lf\|_\infty.
\]
Consequently, we may conclude that, for all $s, t \geq 0$,
\[
|f(X(t)) - f(X(s))| \leq 2 \lambda^{-1} \|Lf\|_\infty
+ \big| \lambda e^{\lambda t} [G_\lambda f](t) - \lambda e^{\lambda s} [G_\lambda f](s) \big|,
\]

and hence that the limits $\lim_{t \downarrow s} f(X(t))$ and $\lim_{t \uparrow s} f(X(t))$ exist $P$-almost surely
for all $f \in D(L)$. By separability and density of $D(L)$ it follows that the limits
$\lim_{t \downarrow s} X(t)$ and $\lim_{t \uparrow s} X(t)$ exist $P$-almost surely for all $s \geq 0$. Put $Z(s)(\omega) =
\lim_{t \downarrow s,\, t \in \mathbb{Q}} X(t)(\omega)$, $s \geq 0$. Then, for $P$-almost all $\omega$ and for all $s \geq 0$, $Z(s)(\omega)$ is
well-defined, possesses left limits and is right continuous. In addition we have
\[
\mathbb{E}\big( f(Z(s))\,g(X(s)) \big) = \mathbb{E}\big( f(X(s+))\,g(X(s)) \big)
= \lim_{t \downarrow s} \mathbb{E}\big( f(X(t))\,g(X(s)) \big)
= \mathbb{E}\big( f(X(s))\,g(X(s)) \big), \quad \text{for all } f, g \in C_0(E),
\]
and for all $s \geq 0$: see (3.98). But then we may conclude that $X(s) = Z(s)$ $P$-almost surely for all $s \geq 0$. Hence we may replace $X$ with $Z$ and consequently
(see the arguments in the proof of Theorem 9.4 of Blumenthal and Getoor
[[20], p. 49])
\[
P\big( \omega \in \Omega' : \omega \text{ is right continuous and has left limits in } E^{\Delta} \big) = 1.
\]
Fix $s > t$. We are going to show that the set of paths $\omega \in (E^{\Delta})^{[0,\infty)}$ for which
$\omega(s) = X(s)(\omega)$ belongs to $E$ and for which $\omega(t-) = \lim_{\tau \uparrow t} X(\tau)(\omega) = \Delta$
possesses $P$-measure 0. It suffices to prove that, for $f \in C_0(E)$ fixed with $1 \geq f(x) > 0$
for all $x \in E$, the following integral equalities hold:
\[
\mathbb{E}\big[ f(X(s)),\, f(X(t-)) = 0 \big]
= \int f(X(s))\,\mathbf{1}_{\{f = 0\}}(X(t-))\,dP = 0.
\]






This can be achieved as follows. From the (in-)equalities
\[
\mathbb{E}\big( f(X(s)),\, f(X(t)) = 0 \big)
= \lim_{n \to \infty} \mathbb{E}\Big( f(X(s)) \big( 1 - f(X(t))^{1/n} \big) \Big)
= \lim_{n \to \infty} \mathbb{E}\Big( f(X(s)) \int_0^{1/n} f(X(t))^{a} \log \frac{1}{f(X(t))}\,da \Big)
\]
\[
= \lim_{n \to \infty} \int_0^{1/n} \mathbb{E}\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big)\,da
\]
\[
= \lim_{n \to \infty} \int_0^{1/n} \big( \mathbb{E} - \mathbb{E}_{n_\ell} \big)\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big)\,da
+ \lim_{n \to \infty} \int_0^{1/n} \mathbb{E}_{n_\ell}\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big)\,da
\]
\[
\leq \lim_{n \to \infty} \int_0^{1/n} \big( \mathbb{E} - \mathbb{E}_{n_\ell} \big)\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big)\,da
+ \mathbb{E}_{n_\ell}\big( f(X(s)),\, f(X(t)) = 0 \big),
\]
we conclude that
\[
\mathbb{E}\big( f(X(s)),\, f(X(t)) = 0 \big)
= \lim_{n \to \infty} \mathbb{E}\Big( f(X(s)) \big( 1 - f(X(t))^{1/n} \big) \Big)
\]
\[
\leq \lim_{n \to \infty} \int_0^{1/n} \Big( \mathbb{E}\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big)
- \mathbb{E}_{n_\ell}\Big( f(X(s))\,f(X(t))^{a} \log \frac{1}{f(X(t))} \Big) \Big)\,da.
\]
Since the function $x \mapsto f(x)^{a} \log \dfrac{1}{f(x)}$ belongs to $C_0(E)$ for every $a > 0$, we
obtain, upon letting $\ell$ tend to $\infty$, that $\mathbb{E}\big( f(X(s)),\, f(X(t)) = 0 \big) = 0$, where
$s > t$. To see this apply Scheff\'e's theorem (see e.g. Bauer [[10], Corollary
2.12.5, p. 105]) to the sequence $a \mapsto \mathbb{E}_{n_\ell}\Big( f(X(s))\,f(X(t))^{a} \log \dfrac{1}{f(X(t))} \Big)$.

From description (3.90), it then follows that $P$ belongs to $P'(\Omega)$; it is also clear
that the limits in (3.92) exist. □

3.42. Proposition. Suppose that for every $x \in E$ the martingale problem is
uniquely solvable. Define the map $F : P'(\Omega) \to E^{\Delta}$ by $F(P) = x$, where $P \in
P'(\Omega)$ is such that $P(X(0) = x) = 1$. Also notice that $F(P_\Delta) = \Delta$. Then
$F$ is a homeomorphism from $P'(\Omega)$ onto $E^{\Delta}$. In fact it follows that for every
$u \in C_0(E)$ and for every $s \geq 0$, the function $x \mapsto \mathbb{E}_x(u(X(s)))$ belongs to $C_0(E)$.

Proof. Since the martingale problem is uniquely solvable for every $x \in E$,
the map $F$ is a one-to-one map from the compact metric Hausdorff space $P'(\Omega)$
onto $E^{\Delta}$ (see Proposition 3.41). Let, for $x \in E$, the probability $P_x$ be the unique
solution to the martingale problem:

(i) For every $f \in D(L)$ the process $f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds$,
$t \geq 0$, is a $P_x$-martingale;

(ii) $P_x(X(0) = x) = 1$.

Then, by definition, $F(P_x) = x$, $x \in E$, and $F(P_\Delta) = \Delta$. Moreover, since
for every $x \in E$ the martingale problem is uniquely solvable, we see $P'(\Omega) =
\{P_x : x \in E^{\Delta}\}$. Let $(x_\ell : \ell \in \mathbb{N})$ be a sequence in $E^{\Delta}$ with the property that
$\lim_{\ell \to \infty} d(P_{x_\ell}, P_x) = 0$ for some $x \in E^{\Delta}$. Then $\lim_{\ell \to \infty} |v_j(x_\ell) - v_j(x)| = 0$, for
all $j \in \mathbb{N}$, where, as above, the span of the sequence $(v_j : j \in \mathbb{N})$ is dense in
$C(E^{\Delta})$. It follows that $\lim_{\ell \to \infty} x_\ell = x$ in $E^{\Delta}$. Consequently the mapping $F$
is continuous. Since $F$ is a continuous bijective map from one compact metric
Hausdorff space $P'(\Omega)$ onto another such space $E^{\Delta}$, its inverse is continuous as
well. Among others this implies that, for every $s \in \mathbb{Q} \cap [0, \infty)$ and for every
$j \geq 1$, the function $x \mapsto \int v_j(X(s))\,dP_x$ belongs to $C_0(E)$. Since the linear span
of the sequence $(v_j : j \geq 1)$ is dense in $C_0(E)$ it also follows that for every
$v \in C_0(E)$, the function $x \mapsto \int v(X(s))\,dP_x$ belongs to $C_0(E)$. Next let $s_0 \geq 0$
be arbitrary. For every $j \geq 1$ and every $s \in \mathbb{Q} \cap [0, \infty)$, $s > s_0$, we have by the
martingale property:
\[
\sup_{x \in E} \big| \mathbb{E}_x(v_j(X(s))) - \mathbb{E}_x(v_j(X(s_0))) \big|
= \sup_{x \in E} \Big| \int_{s_0}^{s} \mathbb{E}_x\big( Lv_j(X(\sigma)) \big)\,d\sigma \Big|
\leq (s - s_0)\,\|Lv_j\|_\infty. \tag{3.101}
\]
Consequently, for every $s \in [0, \infty)$, the function $x \mapsto \mathbb{E}_x(v_j(X(s)))$, $j \geq 1$,
belongs to $C_0(E)$. It follows that, for every $v \in C_0(E)$ and every $s \geq 0$, the
function $x \mapsto \mathbb{E}_x(v(X(s)))$ belongs to $C_0(E)$. This proves Proposition 3.42. □

The proof of the following proposition may be copied from Ikeda and Watanabe
[61], Theorem 5.1, p. 205.

3.43. Proposition. Suppose that for every $x \in E^{\Delta}$ the martingale problem:

(i) For every $f \in D(L)$ the process $f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds$,
$t \geq 0$, is a $P$-martingale;

(ii) $P(X(0) = x) = 1$,

has a unique solution $P = P_x$. Then the process
\[
\{(\Omega, \mathcal{F}, P_x),\; (X(t) : t \geq 0),\; (\vartheta_t : t \geq 0),\; (E, \mathcal{E})\}
\]
is a strong Markov process.

Proof. Fix $x \in E$, let $T$ be a stopping time, and choose a realization of
$A \mapsto \mathbb{E}_x\big[ \mathbf{1}_A \circ \vartheta_T \mid \mathcal{F}_T \big]$, $A \in \mathcal{F}$. Fix any $\omega \in \Omega$ for which
\[
A \mapsto Q_y(A) := \mathbb{E}_x\big[ \mathbf{1}_A \circ \vartheta_T \mid \mathcal{F}_T \big](\omega)
\]
is defined for all $A \in \mathcal{F}$. Here, by definition, $y = X(T)(\omega)$. Notice that, since the
space $E$ is a topological Hausdorff space that satisfies the second countability
axiom, this construction can be performed for $P_x$-almost all $\omega$. Let $f$ be in $D(L)$
and fix $t_2 > t_1 \geq 0$. Moreover fix $C \in \mathcal{F}_{t_1}$. Then $\vartheta_T^{-1}(C)$ is a member of $\mathcal{F}_{t_1 + T}$.
Put $M_f(t) = f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds$, $t \geq 0$. We have
\[
\mathbb{E}_y\big( M_f(t_2)\,\mathbf{1}_C \big) = \mathbb{E}_y\big( M_f(t_1)\,\mathbf{1}_C \big). \tag{3.102}
\]
We also have
\[
\int \Big( f(X(t_2)) - f(X(0)) - \int_0^{t_2} Lf(X(s))\,ds \Big) \mathbf{1}_C\,dQ_y
\]
\[
= \mathbb{E}_x\Big[ \Big( f(X(t_2 + T)) - f(X(T)) - \int_0^{t_2} Lf(X(s + T))\,ds \Big)\,(\mathbf{1}_C \circ \vartheta_T) \;\Big|\; \mathcal{F}_T \Big](\omega)
\]
\[
= \mathbb{E}_x\Big[ \Big( f(X(t_2 + T)) - f(X(T)) - \int_T^{t_2 + T} Lf(X(s))\,ds \Big)\,(\mathbf{1}_C \circ \vartheta_T) \;\Big|\; \mathcal{F}_T \Big](\omega). \tag{3.103}
\]






By Doob's optional sampling theorem, the process
\[
t \mapsto f(X(t + T)) - f(X(T)) - \int_T^{t + T} Lf(X(s))\,ds
\]
is a $P_x$-martingale with respect to the fields $\mathcal{F}_{t + T}$, $t \geq 0$. So from (3.103) we
obtain:
\[
\int \Big( f(X(t_2)) - f(X(0)) - \int_0^{t_2} Lf(X(s))\,ds \Big) \mathbf{1}_C\,dQ_y
\]
\[
= \mathbb{E}_x\Big[ \Big( f(X(t_1 + T)) - f(X(T)) - \int_T^{t_1 + T} Lf(X(s))\,ds \Big)\,(\mathbf{1}_C \circ \vartheta_T) \;\Big|\; \mathcal{F}_T \Big](\omega)
\]
\[
= \int \Big( f(X(t_1)) - f(X(0)) - \int_0^{t_1} Lf(X(s))\,ds \Big) \mathbf{1}_C\,dQ_y. \tag{3.104}
\]
It follows that, for $f \in D(L)$, the process $M_f(t)$ is a $P_y$- as well as a $Q_y$-martingale. Since $P_y[X(0) = y] = 1$ and since
\[
Q_y(X(0) = y) = \mathbb{E}_x\big[ \mathbf{1}_{\{X(0) = y\}} \circ \vartheta_T \mid \mathcal{F}_T \big](\omega)
= \mathbb{E}_x\big[ \mathbf{1}_{\{X(T) = y\}} \mid \mathcal{F}_T \big](\omega)
= \mathbf{1}_{\{X(T) = y\}}(\omega) = 1, \tag{3.105}
\]
we conclude that the probabilities $P_y$ and $Q_y$ are the same. Equality (3.105)
follows, because, by definition, $y = X(T)(\omega) = \omega(T(\omega))$. Since $P_y = Q_y$, it then
follows that
\[
P_{X(T)(\omega)}(A) = \mathbb{E}_x\big[ \mathbf{1}_A \circ \vartheta_T \mid \mathcal{F}_T \big](\omega), \quad A \in \mathcal{F}.
\]
Or putting it differently:
\[
P_{X(T)}(A) = \mathbb{E}_x\big[ \mathbf{1}_A \circ \vartheta_T \mid \mathcal{F}_T \big], \quad A \in \mathcal{F}. \tag{3.106}
\]
However, this is exactly the strong Markov property and completes the proof of
Proposition 3.43. □


The following proposition can be proved in the same manner as Theorem 5.1,
Corollary, in Ikeda and Watanabe [61], p. 206.

3.44. Proposition. If an operator $L$ generates a Feller semigroup, then the
martingale problem is uniquely solvable for $L$.

Proof. Let $\{P(t) : t \geq 0\}$ be the Feller semigroup generated by $L$ and let
\[
\{(\Omega, \mathcal{F}, P_x),\; (X(t) : t \geq 0),\; (\vartheta_t : t \geq 0),\; (E, \mathcal{E})\}
\]
be the associated strong Markov process (see Theorem 3.37). If $f$ belongs to
$D(L)$, then the process $M_f(t) := f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds$, $t \geq 0$, is
a $P_x$-martingale for all $x \in E$. This can be seen as follows. Fix $t_2 > t_1 \geq 0$.
Then
\[
\mathbb{E}_x\big[ M_f(t_2) \mid \mathcal{F}_{t_1} \big]
= M_f(t_1) + \mathbb{E}_x\Big( f(X(t_2)) - \int_{t_1}^{t_2} Lf(X(s))\,ds \;\Big|\; \mathcal{F}_{t_1} \Big) - f(X(t_1))
\]
\[
= M_f(t_1) + \mathbb{E}_x\Big( f(X(t_2 - t_1 + t_1)) - \int_0^{t_2 - t_1} Lf(X(s + t_1))\,ds \;\Big|\; \mathcal{F}_{t_1} \Big) - f(X(t_1))
\]
(Markov property)
\[
= M_f(t_1) + \mathbb{E}_{X(t_1)}\Big( f(X(t_2 - t_1)) - \int_0^{t_2 - t_1} Lf(X(s))\,ds \Big) - f(X(t_1)). \tag{3.107}
\]
Next we compute, for $y \in E$ and $s > 0$, the quantity:


\[
\mathbb{E}_y\Big( f(X(s)) - \int_0^s Lf(X(\sigma))\,d\sigma \Big) - f(y)
= [P(s)f](y) - \int_0^s [P(\sigma)(Lf)](y)\,d\sigma - f(y)
\]
\[
= [P(s)f](y) - \int_0^s \frac{\partial}{\partial \sigma} [P(\sigma)f](y)\,d\sigma - f(y)
= [P(s)f](y) - \big( [P(s)f](y) - [P(0)f](y) \big) - f(y) = 0. \tag{3.108}
\]
Hence from (3.107) and (3.108) it follows that the process $M_f(t)$, $t \geq 0$, is
a $P_x$-martingale. Next we shall prove the uniqueness of the solutions of the
martingale problem associated to the operator $L$. Let $P^1_x$ and $P^2_x$ be solutions
``starting'' in $x \in E$. We have to show that these probabilities coincide. Let $f$
belong to $D(L)$ and let $T$ be a stopping time. Then, via partial integration, we
infer
\[
\lambda \int_0^{\infty} e^{-\lambda t} \Big( f(X(t + T)) - \int_T^{t + T} Lf(X(\tau))\,d\tau - f(X(T)) \Big)\,dt + f(X(T))
\]
\[
= \lambda \int_0^{\infty} e^{-\lambda t} \Big( f(X(t + T)) - \int_T^{t + T} Lf(X(\tau))\,d\tau \Big)\,dt
\]
\[
= \lambda \int_0^{\infty} e^{-\lambda t} f(X(t + T))\,dt
- \lambda \int_0^{\infty} e^{-\lambda t} \int_0^{t} Lf(X(\tau + T))\,d\tau\,dt
\]
\[
= \lambda \int_0^{\infty} e^{-\lambda t} f(X(t + T))\,dt
- \lambda \int_0^{\infty} \Big( \int_{\tau}^{\infty} e^{-\lambda t}\,dt \Big) Lf(X(\tau + T))\,d\tau
\]
\[
= \int_0^{\infty} e^{-\lambda t} \big[ (\lambda I - L)f \big](X(t + T))\,dt. \tag{3.109}
\]


From Doob's optional sampling theorem together with (3.109) we obtain:
\[
\int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^1_x\big( (\lambda I - L)f(X(t + T)) \mid \mathcal{F}_T \big)\,dt - f(X(T))
\]
\[
= \lambda \int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^1_x\Big( f(X(t + T)) - \int_T^{t + T} Lf(X(\tau))\,d\tau - f(X(T)) \;\Big|\; \mathcal{F}_T \Big)\,dt
= 0
\]
\[
= \lambda \int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^2_x\Big( f(X(t + T)) - \int_T^{t + T} Lf(X(\tau))\,d\tau - f(X(T)) \;\Big|\; \mathcal{F}_T \Big)\,dt
\]
\[
= \int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^2_x\big( (\lambda I - L)f(X(t + T)) \mid \mathcal{F}_T \big)\,dt - f(X(T)). \tag{3.110}
\]
Next we set
\[
[R(\lambda)f](x) = \int_0^{\infty} e^{-\lambda t} [P(t)f](x)\,dt, \quad x \in E,\ \lambda > 0,\ f \in C_0(E). \tag{3.111}
\]
Then
\[
(\lambda I - L)R(\lambda)f = f,\ f \in C_0(E); \qquad R(\lambda)(\lambda I - L)f = f,\ f \in D(L). \tag{3.112}
\]
Among other things we see that $R(\lambda I - L) = C_0(E)$, $\lambda > 0$. From (3.110) it
then follows that, for $g \in C_0(E)$,
\[
\int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^1_x\big( g(X(t + T)) \mid \mathcal{F}_T \big)\,dt
= \int_0^{\infty} e^{-\lambda t}\,[P(t)g](X(T))\,dt
= \int_0^{\infty} e^{-\lambda t}\,\mathbb{E}^2_x\big( g(X(t + T)) \mid \mathcal{F}_T \big)\,dt. \tag{3.113}
\]
Since Laplace transforms are unique, since $g$ belongs to $C_0(E)$ and since paths
are right continuous, we conclude
\[
\mathbb{E}^1_x\big( g(X(t + T)) \mid \mathcal{F}_T \big) = [P(t)g](X(T)) = \mathbb{E}^2_x\big( g(X(t + T)) \mid \mathcal{F}_T \big), \tag{3.114}
\]
whenever $g$ belongs to $C_0(E)$, whenever $t \geq 0$ and whenever $T$ is a stopping
time. The first equality in (3.114) holds $P^1_x$-almost surely and the second $P^2_x$-almost surely. As in Theorem 3.36 it then follows that
\[
\mathbb{E}^1_x\Big( \prod_{j=1}^{n} f_j(X(t_j)) \Big) = \mathbb{E}^2_x\Big( \prod_{j=1}^{n} f_j(X(t_j)) \Big) \tag{3.115}
\]
for $n = 1, 2, \ldots$ and for $f_1, \ldots, f_n$ in $C_0(E)$. But then the probabilities $P^1_x$ and
$P^2_x$ are the same. This proves Proposition 3.44. □
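The identities (3.111) and (3.112) can be sanity-checked in a setting much simpler than $C_0(E)$. The sketch below (not part of the original text) assumes a finite state space, so that the semigroup is the matrix exponential $P(t) = e^{tQ}$ of a generator matrix $Q$ and the resolvent is $(\lambda I - Q)^{-1}$; the matrix $Q$ and all numerical parameters are illustrative choices.

```python
# Finite-state sketch of (3.111)-(3.112): for a Markov chain with generator
# matrix Q, P(t) = exp(tQ), so the Laplace transform
# R(lam) = int_0^infty e^{-lam t} exp(tQ) dt should equal (lam I - Q)^{-1}.
import numpy as np

Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])            # rows sum to 0: a conservative generator
lam = 0.5

# exp(tQ) via the eigendecomposition of Q (Q is diagonalizable here).
eigvals, V = np.linalg.eig(Q)
V_inv = np.linalg.inv(V)
expm = lambda t: (V * np.exp(t * eigvals)) @ V_inv

# Trapezoidal quadrature of the Laplace transform of the semigroup.
ts = np.linspace(0.0, 60.0, 60_001)
vals = np.array([np.exp(-lam * t) * expm(t) for t in ts])
dt_q = ts[1] - ts[0]
R_quad = (vals[0] + vals[-1]) / 2.0 * dt_q + vals[1:-1].sum(axis=0) * dt_q

R_exact = np.linalg.inv(lam * np.eye(2) - Q)
print(np.max(np.abs(R_quad - R_exact)))   # small quadrature error
```

The quadrature reproduces $(\lambda I - Q)^{-1}$ up to discretization error, and applying $\lambda I - Q$ to `R_quad` recovers the identity matrix, mirroring the first equality in (3.112).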





The theorem we want to prove reads as follows.

3.45. Theorem. Let $L$ be a linear operator with domain $D(L)$ and range $R(L)$
in $C_0(E)$. Let $\Omega$ be the path space $\Omega = D([0, \infty), E^{\Delta})$. The following assertions
are equivalent:

(i) The operator $L$ is closable and its closure generates a Feller semigroup;

(ii) The operator $L$ solves the martingale problem maximally and its domain
$D(L)$ is dense in $C_0(E)$;

(iii) The operator $L$ verifies the maximum principle, its domain $D(L)$ is
dense in $C_0(E)$ and there exists $\lambda_0 > 0$ such that the range $R(\lambda_0 I - L)$
is dense in $C_0(E)$.

3.46. Remark. The hard part in (iii) is usually the range property: there exists
$\lambda_0 > 0$ such that the range $R(\lambda_0 I - L)$ is dense in $C_0(E)$. The theorem also
shows, in conjunction with the results on Feller semigroups and Markov processes, the relations which exist between the unique solvability of the martingale
problem, the strong Markov property, and densely defined operators verifying
the maximum principle together with the range property. However, if $L$ is in fact
a second order differential operator, then we want to read off the range property
from the coefficients. There do exist results in this direction. The interested
reader is referred to the literature: Stroock and Varadhan [133] and also Ikeda
and Watanabe [61].

In what follows we shall assume that the equivalence of (i) and (iii) has already
been established. A proof can be found in [141], Theorem 2.2, p. 14. In the
proof of (ii) ⇒ (i) we shall use this result. We shall also show the implication
(i) ⇒ (ii).

Proof. (ii) ⇒ (i). Let, for $x \in E$, the probability $P_x$ be the unique solution
of the martingale problem associated to the operator $L$. From Proposition 3.43
it follows that the process $\{(\Omega, \mathcal{F}, P_x), (X(t) : t \geq 0), (\vartheta_t : t \geq 0), (E, \mathcal{E})\}$ is a
strong Markov process. Define the operators $\{P(t) : t \geq 0\}$ as follows:
\[
[P(t)f](x) = \mathbb{E}_x\big( f(X(t)) \big), \quad f \in C_0(E),\ t \geq 0. \tag{3.116}
\]
We also define the operators $\{R(\lambda) : \lambda > 0\}$ as follows:
\[
[R(\lambda)f](x) = \int_0^{\infty} e^{-\lambda t} [P(t)f](x)\,dt, \quad f \in C_0(E),\ \lambda > 0. \tag{3.117}
\]
From Proposition 3.42 it follows that the operators $P(t)$ leave $C_0(E)$ invariant
and hence we also have $R(\lambda)C_0(E) \subset C_0(E)$. From the Markov property it follows that $\{P(t) : t \geq 0\}$ is a Feller semigroup and that the family $\{R(\lambda) : \lambda > 0\}$
is a resolvent family in the sense that
\[
P(s + t) = P(s) \circ P(t), \quad s, t \geq 0, \tag{3.118}
\]
\[
R(\lambda_2) - R(\lambda_1) = (\lambda_1 - \lambda_2)\,R(\lambda_1) \circ R(\lambda_2), \quad \lambda_1, \lambda_2 > 0. \tag{3.119}
\]
For $\lambda > 0$ fixed the operator $L_0$ is defined in $C_0(E)$ as follows:
\[
L_0 : R(\lambda)f \mapsto \lambda R(\lambda)f - f, \quad f \in C_0(E). \tag{3.120}
\]
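As a hedged illustration of (3.118) and (3.119) (not part of the original text), both identities can be verified numerically in the finite-state case, where $P(t) = e^{tQ}$ for a generator matrix $Q$ and $R(\lambda) = (\lambda I - Q)^{-1}$; the matrix $Q$, the times $s, t$ and the values $\lambda_1, \lambda_2$ below are arbitrary choices.

```python
# Finite-state sketch of the semigroup property (3.118) and the resolvent
# equation (3.119): P(t) = exp(tQ), R(lam) = (lam I - Q)^{-1}.
import numpy as np

Q = np.array([[-3.0, 2.0, 1.0],
              [1.0, -1.0, 0.0],
              [0.5, 0.5, -1.0]])
I = np.eye(3)

def expm(A, terms=60):
    """Matrix exponential via its power series (adequate for small A)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

s, t = 0.3, 0.7
semigroup_gap = np.max(np.abs(expm((s + t) * Q) - expm(s * Q) @ expm(t * Q)))

lam1, lam2 = 1.0, 2.5
R = lambda lam: np.linalg.inv(lam * I - Q)
resolvent_gap = np.max(np.abs(R(lam2) - R(lam1)
                              - (lam1 - lam2) * R(lam1) @ R(lam2)))

print(semigroup_gap, resolvent_gap)   # both near machine precision
```

Both gaps are at the level of floating-point rounding, which is exactly what (3.118) and (3.119) predict in this matrix model.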




Here the domain $D(L_0)$ is given by $D(L_0) = \{R(\lambda)f : f \in C_0(E)\}$. The operator
$L_0$ is well-defined. For, if $f_1$ and $f_2$ in $C_0(E)$ are such that $R(\lambda)f_1 = R(\lambda)f_2$, then
by the resolvent property (3.119) we see $\mu R(\mu)f_1 = \mu R(\mu)f_2$, $\mu > 0$. Let $\mu$ tend
to $\infty$ to obtain $f_1 = f_2$. Since the operator $R(\lambda)$ is continuous, the operator $L_0$ is
closed. Next we shall prove that $L_0$ is an extension of $L$. By partial integration,
it follows that, for $f \in D(L)$,
\[
e^{-\lambda t} \Big( f(X(t)) - f(X(0)) - \int_0^t Lf(X(\tau))\,d\tau \Big)
+ \lambda \int_0^t e^{-\lambda s} \Big( f(X(s)) - f(X(0)) - \int_0^s Lf(X(\tau))\,d\tau \Big)\,ds
\]
\[
= e^{-\lambda t} f(X(t)) - f(X(0)) + \int_0^t e^{-\lambda s} (\lambda I - L)f(X(s))\,ds. \tag{3.121}
\]
As a consequence, upon applying (3.121) once more, the processes
\[
\Big\{ e^{-\lambda t} f(X(t)) - f(X(0)) + \int_0^t e^{-\lambda s} (\lambda I - L)f(X(s))\,ds : t \geq 0 \Big\}, \quad f \in D(L), \tag{3.122}
\]
are $P_x$-martingales for all $x \in E$. Here we employ the fact that the processes
\[
\Big\{ f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds : t \geq 0 \Big\}, \quad f \in D(L),
\]
are $P_x$-martingales. This is part of assertion (ii). From assertion (3.122) it
follows that
\[
0 = \mathbb{E}_x\Big( e^{-\lambda t} f(X(t)) - f(X(0)) + \int_0^t e^{-\lambda s} (\lambda I - L)f(X(s))\,ds \Big), \quad f \in D(L). \tag{3.123}
\]
Let $t$ tend to infinity in (3.123) to obtain
\[
0 = -\mathbb{E}_x\big( f(X(0)) \big) + \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}_x\big( (\lambda I - L)f(X(s)) \big)\,ds, \quad f \in D(L). \tag{3.124}
\]
From (3.124) we obtain $f(x) = \int_0^{\infty} e^{-\lambda s} [P(s)(\lambda I - L)f](x)\,ds$, $f \in D(L)$. Or
writing this differently, $f = R(\lambda)(\lambda I - L)f$, $f \in D(L)$. Let $f$ belong to $D(L)$.
Then $f = R(\lambda)g$, with $g = (\lambda I - L)f$, and hence $f$ belongs to $D(L_0)$. Moreover
we see
\[
L_0 f = L_0(R(\lambda)g) = \lambda R(\lambda)g - g = \lambda f - (\lambda f - Lf) = Lf. \tag{3.125}
\]
It follows that $L_0$ is a closed linear extension of $L$. In addition we have $R(\lambda I -
L_0) = C_0(E)$. We shall show that the operator $L_0$ verifies the maximum principle.
This can be achieved as follows. Let $f$ in $C_0(E)$ be such that, for some $x_0 \in E$,
\[
\mathrm{Re}\,(R(\lambda)f)(x_0) = \sup\{ \mathrm{Re}\,R(\lambda)f(x) : x \in E \} > 0. \tag{3.126}
\]
Then $\mathrm{Re}\,(R(\lambda)f)(x_0) \geq \mathrm{Re}\,R(\lambda)f(X(t))$, $t \geq 0$, and hence
\[
\mathrm{Re}\,(R(\lambda)f)(x_0) \geq \mathrm{Re} \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}_{X(t)}\big( f(X(s)) \big)\,ds, \quad t \geq 0. \tag{3.127}
\]


So that, upon employing the Markov property, we obtain for $t \geq 0$:
\[
\mathrm{Re}\,(R(\lambda)f)(x_0)
\geq \mathrm{Re} \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}_{x_0}\Big( \mathbb{E}_{X(t)}\big( f(X(s)) \big) \Big)\,ds
= \mathrm{Re}\,\mathbb{E}_{x_0}\big( [R(\lambda)f](X(t)) \big). \tag{3.128}
\]
Hence, for $\mu > 0$, we obtain
\[
\frac{1}{\mu}\,\mathrm{Re}\,(R(\lambda)f)(x_0) = \int_0^{\infty} e^{-\mu t}\,\mathrm{Re}\,R(\lambda)f(x_0)\,dt
\geq \mathrm{Re} \int_0^{\infty} e^{-\mu t}\,\mathbb{E}_{x_0}\big( [R(\lambda)f](X(t)) \big)\,dt
\]
(resolvent equation (3.119))
\[
= \mathrm{Re}\,R(\mu)R(\lambda)f(x_0)
= \mathrm{Re}\,\frac{R(\lambda)f(x_0) - R(\mu)f(x_0)}{\mu - \lambda}. \tag{3.129}
\]
Consequently $\mathrm{Re}\,[\lambda R(\lambda)f](x_0) \leq \mathrm{Re}\,[\mu R(\mu)f](x_0)$, $\mu > \lambda$. Let $\mu$ tend to
infinity, use right continuity of paths and the continuity of $f$, to infer
\[
\mathrm{Re}\,L_0 R(\lambda)f(x_0) = \mathrm{Re}\,\big\{ \lambda R(\lambda)f(x_0) - f(x_0) \big\} \leq 0. \tag{3.130}
\]





This proves that $L_0$ verifies the maximum principle. Employing the implication
(iii) ⇒ (i) in Theorem 3.45 yields that $L_0$ is the generator of a Feller semigroup.
From Proposition 3.44 it then follows that for $L_0$ the martingale problem is
uniquely solvable. Since $L$ solves the martingale problem maximally and since
$L_0$ extends $L$, it follows that $L_0 = \overline{L}$, the closure of $L$. Consequently the operator
$L$ is closable and its closure generates a Feller semigroup.

(i) ⇒ (ii). Let the closure $\overline{L}$ of $L$ be the generator of a Feller semigroup.
From Proposition 3.44 it follows that for $\overline{L}$ the martingale problem is uniquely
solvable. Hence this is true for $L$. We still have to prove that $L$ is maximal with
respect to this property. Let $L_1$ be any closed linear extension of $L$ for which
the martingale problem is uniquely solvable. Define $\widetilde{L}_1$ in the same fashion as
in the proof of the implication (ii) ⇒ (i), with $L_1$ replacing $L$. Then $\widetilde{L}_1$ is a
closed linear operator which extends $L_1$, so that $\widetilde{L}_1$ extends $L$. As in the proof
of the implication (ii) ⇒ (i) it also follows that $\widetilde{L}_1$ generates a Feller semigroup.
Since, by (i), the closure of $L$ also generates a Feller semigroup, we conclude, by
uniqueness of generators, that $\overline{L} = \widetilde{L}_1$. Since $\widetilde{L}_1 \supseteq L_1 \supseteq L$ and $\widetilde{L}_1 = \overline{L}$, it follows
that the closure of $L$ coincides with $\widetilde{L}_1$ and extends $L_1$. This proves the maximality property
of $L$, and so the proof of Theorem 3.45 is complete. □

In fact a careful analysis of the proof of Theorem 3.45 shows the following result.

3.47. Proposition. Let $L$ be a densely defined operator for which the martingale problem is uniquely solvable, and which is maximal for this property. Then
there exists a unique closed linear extension $L_0$ of $L$ which is the generator of
a Feller semigroup.

Proof. Existence. Let $\{P_x : x \in E\}$ be the solution for $L$, and assume that
for all $f \in C_0(E)$ the function $x \mapsto [P(t)f](x)$ belongs to $C_0(E)$ for all $t \geq 0$.
Here $P(t)f(x)$ is defined by
\[
[P(t)f](x) = \mathbb{E}_x\big( f(X(t)) \big), \qquad
[R(\lambda)f](x) = \int_0^{\infty} e^{-\lambda t} [P(t)f](x)\,dt,
\]
\[
L_0(R(\lambda)f) := \lambda R(\lambda)f - f, \quad f \in C_0(E).
\]
Here $t \geq 0$ and $\lambda > 0$ is fixed. Then, as follows from the proof of Theorem 3.45,
the operator $L_0$ generates a Feller semigroup.

Uniqueness. Let $L_1$ and $L_2$ be closed linear extensions of $L$, which both generate
Feller semigroups. Let
\[
\{(\Omega, \mathcal{F}, P^1_x),\; (X(t) : t \geq 0),\; (\vartheta_t : t \geq 0),\; (E, \mathcal{E})\}
\]
respectively
\[
\{(\Omega, \mathcal{F}, P^2_x),\; (X(t) : t \geq 0),\; (\vartheta_t : t \geq 0),\; (E, \mathcal{E})\}
\]
be the corresponding Markov processes. For every $f \in D(L)$, the process
$f(X(t)) - f(X(0)) - \int_0^t Lf(X(s))\,ds$, $t \geq 0$, is a martingale with respect to $P^1_x$
as well as with respect to $P^2_x$. Uniqueness implies $P^1_x = P^2_x$ and hence $L_1 = L_2$.

The proof of Proposition 3.47 is complete now. □




3.48. Corollary. Let $L$ be a densely defined linear operator with domain $D(L)$
and range $R(L)$ in $C_0(E)$. The following assertions are equivalent:

(i) Some extension of $L$ generates a Feller semigroup.

(ii) For some extension of $L$ the martingale problem is uniquely solvable for
every $x \in E$.

Proof. (i) ⇒ (ii). Let $L_0$ be an extension of $L$ that generates a Feller semigroup. Let $\{(\Omega, \mathcal{F}, P_x), (X(t) : t \geq 0), (\vartheta_t : t \geq 0), (E, \mathcal{E})\}$ be the corresponding
Markov process. For $x \in E$ the probability $P_x$ is the unique solution for the
martingale problem starting in $x$.

(ii) ⇒ (i). Let $L_0$ be an extension of $L$ for which the martingale problem is
uniquely solvable for every $x \in E$. Also suppose that $L_0$ is maximal for this
property. Let $\{P_x : x \in E\}$ be the unique solution of the corresponding martingale problem. Define the operators $P(t)$, $t \geq 0$, by $[P(t)f](x) = \mathbb{E}_x(f(X(t)))$,
$f \in C_0(E)$. From the proof of Theorem 3.45 it follows that $\{P(t) : t \geq 0\}$ is a
Feller semigroup with generator $L_0$.

This completes the proof of Corollary 3.48. □

3.49. Example. Let $L_0$ be an unbounded generator of a Feller semigroup in
$C_0(E)$ and let $\mu_k$ and $\nu_k$, $1 \leq k \leq n$, be finite (signed) Borel measures on $E$.
Define the operator $L_{\mu,\nu}$ as follows:
\[
D(L_{\mu,\nu}) = \bigcap_{k=1}^{n} \Big\{ f \in D(L_0) : \int L_0 f\,d\mu_k = \int f\,d\nu_k \Big\},
\qquad L_{\mu,\nu} f = L_0 f, \quad f \in D(L_{\mu,\nu}).
\]
Then the martingale problem is uniquely solvable for $L_{\mu,\nu}$. In fact let
\[
\{(\Omega, \mathcal{F}, P_x),\; (X(t) : t \geq 0),\; (\vartheta_t : t \geq 0),\; (E, \mathcal{E})\}
\]
be the strong Markov process associated to the Feller semigroup generated by
$L_0$. Then $P = P_x$ solves the martingale problem

(a) For every $f \in D(L_{\mu,\nu})$ the process
\[
f(X(t)) - f(X(0)) - \int_0^t L_{\mu,\nu} f(X(s))\,ds, \quad t \geq 0,
\]
is a $P$-martingale;

(b) $P(X(0) = x) = 1$,

uniquely. This can be seen as follows. We may and do suppose that the functionals $f \mapsto \int L_0 f\,d\mu_k - \int f\,d\nu_k$, $f \in D(L_0)$, $1 \leq k \leq n$, are linearly independent.
If some $\mu_k$ belongs to $D(L_0^*)$, then $D(L_{\mu,\nu})$ is not dense, and if none of the
measures $\mu_k$ belongs to $D(L_0^*)$, then $D(L_{\mu,\nu})$ is dense in $C_0(E)$. In either case
there exists a unique extension, in fact $L_0$, of $L_{\mu,\nu}$, which generates a Feller semigroup. Therefore we choose functions $u_k \in D(L_0)$, $1 \leq k \leq n$, in such a way
that $\int L_0 u_k\,d\mu_\ell - \int u_k\,d\nu_\ell = \delta_{k\ell}$, $1 \leq k, \ell \leq n$. Suppose that $P^1_x$ and $P^2_x$ are probabilities, that start in $x$, with the property that for all $f \in D(L_{\mu,\nu})$ the process
$t \mapsto f(X(t)) - f(X(0)) - \int_0^t L_0 f(X(s))\,ds$ is a $P^1_x$- as well as a $P^2_x$-martingale.
As in (3.110) we see that for all $f \in D(L_0)$ (with $v_k = (\lambda I - L_0)u_k$, $1 \leq k \leq n$):
\[
f(x) - \sum_{k=1}^{n} \Big( \int L_0 f\,d\mu_k - \int f\,d\nu_k \Big) u_k(x) \tag{3.131}
\]
\[
= \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^1_x\Big( \Big[ (\lambda I - L_0)\Big( f - \sum_{k=1}^{n} \Big( \int L_0 f\,d\mu_k - \int f\,d\nu_k \Big) u_k \Big) \Big](X(s)) \Big)\,ds
\]
\[
= \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^2_x\Big( \Big[ (\lambda I - L_0)\Big( f - \sum_{k=1}^{n} \Big( \int L_0 f\,d\mu_k - \int f\,d\nu_k \Big) u_k \Big) \Big](X(s)) \Big)\,ds.
\]
Write $f = (\lambda I - L_0)^{-1} g = R(\lambda)g$. From (3.131) we obtain
\[
R(\lambda)g(x) - \sum_{k=1}^{n} \Big( \int (\lambda R(\lambda)g - g)\,d\mu_k - \int R(\lambda)g\,d\nu_k \Big) u_k(x) \tag{3.132}
\]
\[
= \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^1_x\Big( g(X(s)) - \sum_{k=1}^{n} \Big( \int (\lambda R(\lambda)g - g)\,d\mu_k - \int R(\lambda)g\,d\nu_k \Big) v_k(X(s)) \Big)\,ds
\]
\[
= \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^2_x\Big( g(X(s)) - \sum_{k=1}^{n} \Big( \int (\lambda R(\lambda)g - g)\,d\mu_k - \int R(\lambda)g\,d\nu_k \Big) v_k(X(s)) \Big)\,ds.
\]
Put $F(\lambda) = (F_1(\lambda), \ldots, F_n(\lambda))$ and put $U(\lambda) = (u_{k,\ell}(\lambda))$, where, for $1 \leq k \leq n$,
\[
F_k(\lambda) = \int_0^{\infty} e^{-\lambda s} \Big( \mathbb{E}^1_x\big[ (\lambda I - L_0)u_k(X(s)) \big] - \mathbb{E}^2_x\big[ (\lambda I - L_0)u_k(X(s)) \big] \Big)\,ds,
\]
and where $u_{k,\ell}(\lambda)$, $1 \leq k, \ell \leq n$, is given by
\[
u_{k,\ell}(\lambda) = \int \lambda R(\lambda)u_\ell\,d\mu_k - \int u_\ell\,d\mu_k - \int R(\lambda)u_\ell\,d\nu_k.
\]
Since (3.132) is valid for all $g \in C_0(E)$, it follows that $F(\lambda) = U(\lambda)F(\lambda)$.
Since, in addition, $\lim_{\lambda \to \infty} U(\lambda) = 0$, we see $F_k(\lambda) = 0$ for all $\lambda > 0$ and
for $1 \leq k \leq n$. So that $\int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^1_x\big( u_k(X(s)) \big)\,ds = \int_0^{\infty} e^{-\lambda s}\,\mathbb{E}^2_x\big[ u_k(X(s)) \big]\,ds$
for all $\lambda > 0$ and for all $1 \leq k \leq n$. Again an application of (3.132) yields
$\mathbb{E}^1_x[g(X(s))] = \mathbb{E}^2_x[g(X(s))]$ for all $g \in C_0(E)$. Since these arguments are valid
for any $x \in E$, we conclude, just as in Proposition 2.9 and its Corollary on page
206 of Ikeda and Watanabe [61], that $P^1_x = P^2_x = P_x$, $x \in E$. In particular
we may take $E = [0,1]$, $L_0 f = \frac{1}{2} f''$, $D(L_0) = \{f \in C^2[0,1] : f'(0) = f'(1) = 0\}$,
$\mu_k(f) = \int_{\alpha_k}^{\beta_k} f(s)\,ds$, $\nu_k = 0$, $0 \leq \alpha_k < \beta_k \leq 1$, $1 \leq k \leq n$. Then $L_0$ generates
the Feller semigroup of reflected Brownian motion: see Liggett [86], Example
5.8, p. 45. For the operator $L_{\mu,\nu}$ the martingale problem is uniquely (but not
maximally uniquely) solvable. However, it does not generate a Feller semigroup.
The previous arguments do not seem to be entirely correct; they ought to be
replaced with some results in Section 10 (e.g. Theorem 3.110).
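As a hedged numerical aside (not in the original text), the reflected Brownian motion appearing in this example, the process generated by $L_0 f = \frac{1}{2} f''$ with Neumann boundary conditions $f'(0) = f'(1) = 0$, can be simulated by folding an ordinary Brownian path back into $[0, 1]$; the step size and all names below are illustrative choices.

```python
# Simulation sketch of reflected Brownian motion on [0, 1]: the free Euler
# step is folded back into the interval by the standard reflection map.
import numpy as np

def reflect(x):
    """Fold a real number into [0, 1] by reflection at both endpoints."""
    y = np.abs(x) % 2.0
    return np.where(y > 1.0, 2.0 - y, y)

rng = np.random.default_rng(1)
n_steps, dt, x0 = 5_000, 1e-3, 0.25

path = np.empty(n_steps + 1)
path[0] = x0
for i in range(n_steps):
    path[i + 1] = reflect(path[i] + rng.normal(0.0, np.sqrt(dt)))

print(path.min(), path.max())   # the whole path stays inside [0, 1]
```

The folded path never leaves $[0, 1]$, while away from the endpoints it moves exactly like free Brownian motion, which is the qualitative behavior the Neumann boundary conditions encode.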


Problem. We want to close this section with the following question. Suppose 
that the operator L possesses a unique extension L 0 , that generates a Feller 
semigroup. Is it true that for L the martingale problem is uniquely solvable? 


An introduction to stochastic processes: 
Brownian motion, Gaussian processes and martingales 


In general the answer is no, but if we require that L solves the martingale problem maximally, then the answer is yes, provided we take the Skorohod space as sample space. This result is proved in Theorem 3.45.

For the time being we will not pursue the Markov property. However, we will 
continue with Brownian motion and stochastic integrals. First we give the 
definition of some interesting processes. 














4. Martingales, submartingales, supermartingales and semimartingales


Let (Ω, F, P) be a probability space and let {F_t : t ≥ 0} be an increasing family of σ-fields in F. If necessary we suppose that the filtration {F_t : t ≥ 0} is right-continuous, i.e. F_t = ⋂_{s>t} F_s, or complete in the sense that, for every t ≥ 0, the σ-field F_t contains all A ∈ F with P(A) = 0.

Let {H(t) : t ≥ 0} be a collection of ℝ-valued functions defined on Ω. Such a family is called a (real-valued) process.

3.50. Definition. The following processes and σ-fields will play a role in the sequel.

(a) The process {H(t) : t ≥ 0} is said to be adapted or non-anticipating if, for every t ≥ 0, the variable H(t) is measurable with respect to F_t.

(b1) The symbol A denotes the σ-field (= σ-algebra) of subsets of [0, ∞) × Ω which is generated by the adapted processes which are right-continuous and which possess left limits. These are the so-called càdlàg processes.

(b2) The process {H(t) : t ≥ 0} is said to be optional if the function (t, ω) ↦ H(t, ω) is measurable with respect to A.

(c1) The symbol Π denotes the σ-field of subsets of [0, ∞) × Ω which is generated by the adapted processes which are left-continuous.

(c2) The process {H(t) : t ≥ 0} is said to be predictable if the function (t, ω) ↦ H(t, ω) is measurable with respect to Π.

3.51. Proposition. The collection {(a,b] × A : 0 ≤ a < b, A ∈ F_a} generates the σ-field Π.

Proof. Let A belong to F_a. The variable ω ↦ 1_{(a,b]}(s) 1_A(ω) is measurable with respect to F_s, and the function s ↦ 1_{(a,b]}(s) 1_A(ω) is left-continuous. This proves that (a,b] × A belongs to Π.

Conversely let F be adapted and left-continuous. Put

F_n(s, ω) = Σ_{k=0}^∞ F(k2^{−n}, ω) 1_{(k2^{−n}, (k+1)2^{−n}]}(s).

Then, by left continuity, lim_{n→∞} F_n(s, ω) = F(s, ω), P-almost surely. Moreover the processes {F_n(t) : t ≥ 0} are adapted and are measurable with respect to the σ-field generated by {(a,b] × A : 0 ≤ a < b, A ∈ F_a}. All this completes the proof of Proposition 3.51.

□

3.52. Remark. Since 1_{(a,b]}(s) 1_A(ω) = lim_{n→∞} 1_{[a_n, b_n)}(s) 1_A(ω), where a_n ↓ a and b_n ↓ b with a_n > a, it follows that Π ⊆ A. Here we employ Proposition 3.51.

3.53. Definition. Let {X(t) : t ≥ 0} be an adapted process.

(a) The family {X(t) : t ≥ 0} is a martingale if E(|X(t)|) < ∞, t ≥ 0, and if, for every t > s ≥ 0, E(X(t) | F_s) = X(s), P-almost surely.




(b) The family {X(t) : t ≥ 0} is a submartingale if E(|X(t)|) < ∞, t ≥ 0, and if, for every t > s ≥ 0, E(X(t) | F_s) ≥ X(s), P-almost surely.

(c) The family {X(t) : t ≥ 0} is a supermartingale if E(|X(t)|) < ∞, t ≥ 0, and if, for every t > s ≥ 0, E(X(t) | F_s) ≤ X(s), P-almost surely.

(d) It is P-almost surely of finite variation (on [0,t]) if

sup { Σ_{j=1}^n |X(t_j) − X(t_{j−1})| : 0 ≤ t_0 < t_1 < … < t_n ≤ t } < ∞,

P-almost surely.

(e) It is a local martingale if there exists an increasing sequence of stopping times (T_n : n ∈ ℕ) for which lim_{n→∞} T_n = ∞, P-almost surely, and for which the processes

{X(T_n ∧ t) : t ≥ 0}, n = 1, 2, …

are martingales with respect to the filtration {F_{T_n ∧ t} : t ≥ 0}.

(f) Let T be a stopping time. The process {X(t) : t ≥ 0} is a local martingale on [0, T) if there exists an increasing sequence of stopping times (T_n : n ∈ ℕ) for which lim_{n→∞} T_n = T, P-almost surely, and for which the processes {X(T_n ∧ t) : t ≥ 0}, n = 1, 2, … are martingales with respect to the filtration {F_{T_n ∧ t} : t ≥ 0}.

(g) The definitions of “local submartingale”, “local supermartingale” and “being locally P-almost surely of finite variation” are now self-explanatory.

(h) The process {X(t) : t ≥ 0} is called a semi-martingale if it can be written in the form X(t) = M(t) + A(t), where {M(t) : t ≥ 0} is a martingale and where {A(t) : t ≥ 0} is an adapted process which is of finite variation, P-almost surely, on [0,t] for every t > 0, and for which E|A(t)| < ∞, t ≥ 0.

(i) The process {X(t) : t ≥ 0} is of class (DL) if for every t > 0 the family

{X(τ) : 0 ≤ τ ≤ t, τ is an (F_t)-stopping time}

is uniformly integrable.

3.54. Remark. Let {X(t) : t ≥ 0} be a semi-martingale. The decomposition X(t) = M(t) + A(t), where {M(t) : t ≥ 0} is a martingale and where for every t > 0 the process {A(t) : t ≥ 0} is P-almost surely of finite variation and where {A(t) : t ≥ 0} is predictable and right-continuous P-almost surely, is unique, provided A(0) = 0, P-almost surely. This follows from the fact that a right-continuous martingale which is predictable and of finite variation is necessarily constant: this is a consequence of the uniqueness part of the Doob–Meyer decomposition: see Theorem 1.24. A proof of the Doob–Meyer decomposition theorem may start as follows. Put

X_j(t) = X(2^{−j} ⌊2^j t⌋)

and

(3.133)

A_j(t) = Σ_{0 ≤ k < 2^j t} E [ X((k+1)2^{−j}) − X(k2^{−j}) | F_{k2^{−j}} ],



and prove that M_j(t) := X_j(t) − A_j(t) is a martingale. Then let j → ∞ to obtain X(t) = M(t) + A(t), where M(t) = lim_{j→∞} M_j(t) and A(t) = lim_{j→∞} A_j(t).

3.55. Remark. An F_t-martingale {M(t) : t ≥ 0} is of class (DL), an increasing adapted process {A(t) : t ≥ 0} in L¹(Ω, F, P) is of class (DL), and hence the sum

{M(t) + A(t) : t ≥ 0}

is of class (DL). If {X(t) : t ≥ 0} is a submartingale and if μ is a real number, then the process {max(X(t), μ) : t ≥ 0} is a submartingale of class (DL). Processes of class (DL) are important in the Doob–Meyer decomposition theorem.

We continue with some examples of martingales, submartingales and the like.

3.56. Example. Let T : Ω → [0, ∞] be a stopping time. Since T is a stopping time and since the process {1_{{T < t}} : t ≥ 0} is left-continuous, it is predictable. It follows that the process {1_{{T ≥ t}} : t ≥ 0} is predictable as well.

3.57. Example. Let I be an open interval in ℝ and let φ : I → (−∞, ∞) be an increasing convex function. If {X(t) : t ≥ 0} is a submartingale with values in I, then the process {φ(X(t)) : t ≥ 0} is also a submartingale. For let t > s ≥ 0. Then by the Jensen inequality and the monotonicity of φ it follows that

E [φ(X(t)) | F_s] ≥ φ(E(X(t) | F_s)) ≥ φ(X(s)).

3.58. Example. Let (B(t), P_0) be one-dimensional Brownian motion starting in 0. Then {B(t) : t ≥ 0} is a martingale. Since the definition of martingale also makes sense for vector-valued processes, we also see that an ℝ^ν-valued Brownian motion is a martingale.

3.59. Example. Let (B(t), P_0) be ℝ^ν-valued Brownian motion starting in 0. The process {|B(t)|² − νt : t ≥ 0} is a martingale.

3.60. Example. Let {X(t), P_x} be a (strong) Markov process such that

E_x [f(X(t))] = ∫ p(t, x, y) f(y) dm(y), f ≥ 0,

where the density p(t, x, y) verifies the Chapman–Kolmogorov identity:

p(s + t, x, y) = ∫ p(s, x, z) p(t, z, y) dm(z).

The process {p(t − s, X(s), y) : 0 ≤ s < t} is a martingale on [0, t). For example for X(t) we may take B(t), d-dimensional Brownian motion. Then

p(t, x, y) = p_d(t, x, y) = (2πt)^{−d/2} exp( −|x − y|²/(2t) ).

3.61. Example. Let {X(t) : t ≥ 0} be a right-continuous martingale and let T be a stopping time. The process {X(T ∧ t) : t ≥ 0} is a martingale with respect to {F_t : t ≥ 0} and also with respect to the filtration {F_{T ∧ t} : t ≥ 0}.





3.62. Example. This is a standard example of a closed martingale, i.e. a martingale which is written as conditional expectations on σ-fields taken from a filtration. Let Y be a random variable in L¹(Ω, F, P). The process s ↦ E[Y | F_s], s ≥ 0, is a martingale.

We want to insert an inequality on the second moment of a martingale. This is 
a special case of the Burkholder-Davis-Gundy inequality. 

3.63. Proposition. Let {M(t) : t ≥ 0} be a continuous martingale with M(0) = 0. Then

E(M(t)²) ≤ E(M*(t)²) ≤ 4 E(M(t)²).

Here M*(t) = sup_{0 ≤ s ≤ t} |M(s)|.

















Proof. Define for ξ > 0 the stopping time T_ξ by

T_ξ = inf{t ≥ 0 : M*(t) ≥ ξ}.

Then {M*(t) > ξ} ⊆ {T_ξ ≤ t} and {T_ξ ≤ t} ⊆ {M*(t) ≥ ξ}, and hence, since |M(t)| is a submartingale, we obtain upon using Doob's optional sampling theorem

E(M*(t)²) = ∫_0^∞ P(M*(t)² > λ) dλ

(make the substitution λ = ξ²)

= 2 ∫_0^∞ ξ P(M*(t) > ξ) dξ ≤ 2 ∫_0^∞ ξ P(T_ξ ≤ t) dξ

= 2 ∫_0^∞ E(|M(T_ξ)| : T_ξ ≤ t) dξ

(Doob's optional sampling)

≤ 2 ∫_0^∞ E(|M(t)| : T_ξ ≤ t) dξ

= 2 ∫_0^∞ E(|M(t)| : M*(t) ≥ ξ) dξ

= 2 E(|M(t)| M*(t))

(Cauchy–Schwarz inequality)

≤ 2 (E(M(t)²))^{1/2} (E(M*(t)²))^{1/2}.

Consequently E(M*(t)²) ≤ 4 E(M(t)²). This completes the proof of Proposition 3.63. □
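The inequality in Proposition 3.63 is easy to check numerically. The following Monte Carlo sketch (Python with NumPy; an illustration under the assumption that a discretized Brownian path is a good stand-in for the continuous martingale M with M(0) = 0) estimates both sides; for Brownian motion E(M(t)²) = t, and the estimate of E(M*(t)²) should land between t and 4t.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps, t = 20_000, 200, 1.0
dt = t / n_steps

# Brownian paths: cumulative sums of independent N(0, dt) increments
paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

m_t_sq = np.mean(paths[:, -1] ** 2)                      # estimate of E(M(t)^2), about t
m_star_sq = np.mean(np.max(np.abs(paths), axis=1) ** 2)  # estimate of E(M*(t)^2)

# Proposition 3.63: E(M(t)^2) <= E(M*(t)^2) <= 4 E(M(t)^2)
print(m_t_sq, m_star_sq)
```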

3.64. Remark. The method of proof works very well if E(M*(t)²) is finite. If this is not the case we may use a localization technique. The reader should provide the details. Perhaps truncating is also possible.


5. Regularity properties of stochastic processes

In Theorem 3.18 we proved that Brownian motion possesses a continuous version. We want to amplify this result. In fact we shall prove that Brownian motion has Hölder continuous paths of any order α < ½. This means that for every α < ½ and for every a > 0, a ∈ ℝ, there exists a random variable C(b), depending on Brownian motion, such that for all 0 ≤ s < t ≤ a the inequality

|b(t) − b(s)| ≤ C(b) |t − s|^α

holds P-almost surely. This will be the content of Theorem 3.67 below. We begin with a rather general result, due to Kolmogorov, for arbitrary stochastic processes.




3.65. Theorem. Fix a finite interval [a, b]. Let {X(s) : a ≤ s ≤ b} be a stochastic process on the probability space (Ω, F, P). Suppose that there exist constants K, r and p, such that 0 < r < p < ∞ and such that

E(|X(t) − X(s)|^p) ≤ K |t − s|^{1+r}   (3.134)

for all a ≤ s, t ≤ b. Fix 0 < α < r/p. Then there exists a random variable C(X), which is finite P-almost surely, such that

|X(t) − X(s)| ≤ C(X) |t − s|^α   (3.135)

for all dyadic rational numbers s and t in the interval [a, b]. In particular it follows that a process X = {X(s) : a ≤ s ≤ b} verifying (3.134) has a Hölder continuous version of order α, α < r/p.


PROOF. It suffices to prove (3.135), because the version problem can be taken care of as in Theorem 3.18. Without loss of generality we may and do suppose that a = 0 and that b = 1. Otherwise we consider the process Y defined by Y(s) = X(a_0 + s(b_0 − a_0)), 0 ≤ s ≤ 1, where a_0 and b_0 are dyadic rational with a_0 ≤ a and with b ≤ b_0 and where outside of the interval [a, b] the process X is defined by X(t) = X(a), if a_0 ≤ t ≤ a, and X(t) = X(b), if b ≤ t ≤ b_0. Put ε = r − αp. Then

P(|X(t) − X(s)| ≥ |t − s|^α) ≤ |t − s|^{−αp} E(|X(t) − X(s)|^p) ≤ K |t − s|^{1+ε},   (3.136)

so that

P( |X((k+1)2^{−n}) − X(k2^{−n})| ≥ 2^{−nα} ) ≤ K 2^{−n(1+ε)}.   (3.137)

Hence

Σ_{n=1}^∞ Σ_{k=0}^{2^n−1} P( |X((k+1)2^{−n}) − X(k2^{−n})| ≥ 2^{−nα} ) ≤ K Σ_{n=1}^∞ 2^n 2^{−n} 2^{−nε} = K/(2^ε − 1).   (3.138)

By the Borel–Cantelli lemma it follows that

P( ∪_{m=1}^∞ ∩_{n≥m} { max_{0≤k≤2^n−1} |X((k+1)2^{−n}) − X(k2^{−n})| ≤ 2^{−nα} } ) = 1.   (3.139)

Hence there exists a random integer ν(X) with the following property: for P-almost all ω the inequality

max_{0≤k≤2^n−1} |X((k+1)2^{−n}) − X(k2^{−n})| ≤ 2^{−nα}   (3.140)

is valid for n ≥ ν(X). Next let n ≥ ν(X) and let t be a dyadic rational in the interval [k2^{−n}, (k+1)2^{−n}]. Write t = k2^{−n} + Σ_{j=1}^m γ_j 2^{−(n+j)}, where each γ_j equals 0 or 1. Then

|X(t) − X(k2^{−n})| ≤ Σ_{j=1}^m γ_j 2^{−α(n+j)} ≤ 2^{−nα}/(2^α − 1).   (3.141)




Similarly we have, with t = ℓ2^{−N}, N ≥ n, (k+1)2^{−n} = ℓ2^{−N} + Σ_{j=1}^m γ′_j 2^{−(n+j)}, where each γ′_j equals 0 or 1,

|X(t) − X((k+1)2^{−n})| ≤ Σ_{j=1}^m γ′_j 2^{−α(n+j)} ≤ 2^{−nα}/(2^α − 1).   (3.142)

Next let s and t be dyadic rational numbers with 0 < t − s ≤ 2^{−ν(X)}. Take n ∈ ℕ with 2^{−n−1} ≤ t − s < 2^{−n} and pick k in such a way that k2^{−n−1} ≤ s < (k+1)2^{−n−1}. Then (k+1)2^{−n−1} ≤ t = t − s + s < 2^{−n} + (k+1)2^{−n−1} = (k+3)2^{−n−1}. It follows that, since t belongs to [(k+1)2^{−n−1}, (k+2)2^{−n−1}] or to the interval [(k+2)2^{−n−1}, (k+3)2^{−n−1}],

|X(t) − X(s)| ≤ |X(t) − X((k+2)2^{−n−1})| + |X((k+2)2^{−n−1}) − X((k+1)2^{−n−1})| + |X((k+1)2^{−n−1}) − X(s)|

≤ (1 + 2/(2^α − 1)) 2^{−(n+1)α} ≤ (1 + 2/(2^α − 1)) |t − s|^α.   (3.143)

If 1 ≥ t − s > 2^{−ν(X)}, we choose k and ℓ ∈ ℕ in such a way that 2^{ν(X)} ≥ ℓ > k ≥ 0 and that ℓ2^{−ν(X)} ≤ t < (ℓ+1)2^{−ν(X)} and k2^{−ν(X)} < s ≤ (k+1)2^{−ν(X)}. Then we get

|X(t) − X(s)| ≤ |X(t) − X(ℓ2^{−ν(X)})| + Σ_{j=k+1}^{ℓ−1} |X((j+1)2^{−ν(X)}) − X(j2^{−ν(X)})| + |X((k+1)2^{−ν(X)}) − X(s)|

≤ (2^{ν(X)} + 2) (1 + 2/(2^α − 1)) 2^{−αν(X)} ≤ (2^{ν(X)} + 2) (1 + 2/(2^α − 1)) |t − s|^α.   (3.144)

From (3.143) and (3.144) the result in Theorem 3.65 follows. □


In order to apply the previous result to Brownian motion, we insert a general 
equality for a Gaussian variable X. 


3.66. Proposition. Let X : Ω → ℝ be a non-constant Gaussian variable. Then its distribution is given by

P(X ∈ B) = (2π(E(X²) − (E(X))²))^{−1/2} ∫_B exp( − |x − E(X)|² / (2(E(X²) − (E(X))²)) ) dx,   (3.145)

and its moments E(|X − E(X)|^p), p > −1, are given by

E(|X − E(X)|^p) = (2^{p/2} Γ(½p + ½)/√π) ( E(X²) − (E(X))² )^{p/2}.   (3.146)



Proof. Equality (3.145) follows from formula (3.8), and formula (3.146) is proved by using (3.145). The formal arguments read (we write Y = X − E(X) and σ² = E(X²) − (E(X))²):

E(|Y|^p) = (2πσ²)^{−1/2} ∫_{−∞}^{∞} |y|^p exp(−y²/(2σ²)) dy = (2/√(2π)) σ^p ∫_0^∞ u^p exp(−u²/2) du

= (2/√(2π)) σ^p 2^{(p−1)/2} Γ(½p + ½) = (2^{p/2} Γ(½p + ½)/√π) σ^p.

The latter is the same as (3.146). □
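Formula (3.146) is easy to test by simulation. In the Python sketch below (NumPy assumed; the parameter values are chosen only for illustration), X is a centred Gaussian with standard deviation σ, so (3.146) reads E|X|^p = 2^{p/2} Γ(p/2 + 1/2) σ^p / √π.

```python
import math
import numpy as np

rng = np.random.default_rng(2)
sigma, p = 1.7, 3.0

# Monte Carlo estimate of E|X - E(X)|^p for a centred Gaussian variable
samples = rng.normal(0.0, sigma, size=400_000)
empirical = np.mean(np.abs(samples) ** p)

# Right-hand side of (3.146) with E(X^2) - (E(X))^2 = sigma^2
exact = 2 ** (p / 2) * math.gamma(p / 2 + 0.5) / math.sqrt(math.pi) * sigma ** p

print(empirical, exact)
```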

3.67. Theorem. Let {b(s) : s ≥ 0} be d-dimensional Brownian motion. This process is P-almost surely Hölder continuous of order α for any α < ½.







Proof. It suffices to prove Theorem 3.67 for 1-dimensional Brownian motion. So suppose d = 1 and let α < 1/2. Choose p > 1 so large that α < ½ − 1/p.

From inequality (3.146) in Proposition 3.66 with X = b(t) − b(s) we obtain

E(|b(t) − b(s)|^p) = E(|b(t − s)|^p) = C_p (E(|b(t − s)|²))^{p/2} = C_p |t − s|^{p/2} = C_p |t − s|^{1+r},   (3.147)

where r = p/2 − 1 > pα. An application of Theorem 3.65 yields the desired result. □
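For p = 4, equation (3.147) reads E|b(t) − b(s)|⁴ = 3|t − s|², i.e. condition (3.134) holds with r = 1 and hence Theorem 3.65 already yields Hölder exponents α < 1/4 from fourth moments; larger p pushes α toward 1/2. A small numerical check of the p = 4 case (Python/NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
h = 0.25  # the time gap |t - s|

# Increments b(t) - b(s) are N(0, h); estimate E|b(t) - b(s)|^4
incs = rng.normal(0.0, np.sqrt(h), size=500_000)
fourth_moment = np.mean(incs ** 4)

# (3.147) with p = 4: C_4 = 3, so E|b(t) - b(s)|^4 = 3 h^2 = 3 |t - s|^{1 + r} with r = 1
print(fourth_moment, 3 * h ** 2)
```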


The following theorem says that Brownian motion is nowhere differentiable. 

3.68. Theorem. Fix α > ½. Then with probability one, t ↦ b(t) is nowhere Hölder continuous of order α. More precisely,

P( inf_t limsup_{h→0} |b(t + h) − b(t)|/|h|^α = ∞ ) = 1.

Proof. For a proof we refer the reader to the literature; e.g. Simon [121, Theorem 5.4, p. 46]. □

In the theory of stochastic integration we will have a need for the following lemma, which can also be proved by the strong law of large numbers: see e.g. Smythe [123].

3.69. Lemma. Let {b(s) : s ≥ 0} be one-dimensional Brownian motion. Then, P-almost surely, lim_{n→∞} Σ_{k=0}^{2^n−1} |b((k+1)2^{−n}t) − b(k2^{−n}t)|² = t.

PROOF. Put A_{k,n} = |b((k+1)2^{−n}t) − b(k2^{−n}t)|² − 2^{−n}t. Then the variables A_{k,n}, 0 ≤ k ≤ 2^n − 1, are independent and have expectation 0. So that

E( ( Σ_{k=0}^{2^n−1} A_{k,n} )² ) = Σ_{k=0}^{2^n−1} E( A_{k,n}² ) = 2^n E( ( |b(2^{−n}t)|² − 2^{−n}t )² )

= 2^n ( E|b(2^{−n}t)|⁴ − 2 E|b(2^{−n}t)|² 2^{−n}t + 2^{−2n}t² ) = 2 × 2^{−n} t².   (3.148)

Chebyshev's inequality gives

P( | Σ_{k=0}^{2^n−1} A_{k,n} | > ε ) ≤ 2 × 2^{−n} t²/ε².

Hence Σ_{n=1}^∞ P( | Σ_{k=0}^{2^n−1} A_{k,n} | > ε ) ≤ 2t²/ε². Thus we may apply the Borel–Cantelli lemma to prove the claim in Lemma 3.69. □
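Lemma 3.69 can be observed directly in simulation: along a dyadic partition of [0, t] the squared increments of a Brownian path sum to approximately t. A minimal sketch (Python/NumPy assumed; the level n is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
t, n = 1.0, 16  # dyadic level n: 2^n subintervals of [0, t]

# Increments b((k+1)2^{-n} t) - b(k 2^{-n} t) are independent N(0, 2^{-n} t)
increments = rng.normal(0.0, np.sqrt(t / 2 ** n), size=2 ** n)
quadratic_variation = np.sum(increments ** 2)

# By Lemma 3.69 this tends to t as n -> infinity
print(quadratic_variation)
```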

3.70. Proposition. Brownian motion is nowhere of bounded variation. 




Proof. Just as in the previous lemma we have that, for t > s ≥ 0,

lim_{n→∞} Σ_{k=0}^{2^n−1} |b(s + (k+1)2^{−n}(t−s)) − b(s + k2^{−n}(t−s))|² = t − s,

P-almost surely. Since Brownian paths are almost surely continuous it follows that (for δ > 0)

0 < t − s = lim_{n→∞} Σ_{k=0}^{2^n−1} |b(s + (k+1)2^{−n}(t−s)) − b(s + k2^{−n}(t−s))|²

≤ liminf_{n→∞} max_{0≤ℓ≤2^n−1} |b(s + (ℓ+1)2^{−n}(t−s)) − b(s + ℓ2^{−n}(t−s))|

× Σ_{k=0}^{2^n−1} |b(s + (k+1)2^{−n}(t−s)) − b(s + k2^{−n}(t−s))|

≤ sup_{|σ₂−σ₁| ≤ δ} |b(σ₂) − b(σ₁)| × variation of b on the interval [s, t].

The statement in Proposition 3.70 now follows from the continuity of paths: by uniform continuity the supremum tends to 0 as δ ↓ 0, so the variation of b on [s, t] cannot be finite.

□
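Proposition 3.70 is visible numerically as well: along dyadic partitions of [0, 1] the sum of absolute increments of a Brownian path has expectation √(2·2^n/π) and keeps growing as the partition is refined. A sketch (Python/NumPy assumed; the levels are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
t = 1.0
totals = []
for n in [6, 10, 14]:
    incs = rng.normal(0.0, np.sqrt(t / 2 ** n), size=2 ** n)
    # Sum of absolute increments along the dyadic partition at level n;
    # its expectation is sqrt(2^n * 2 t / pi), which tends to infinity with n
    totals.append(np.sum(np.abs(incs)))
print(totals)
```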

Next we will see how to transfer properties of discrete-time semi-martingales to continuous-time semi-martingales. Most of the results in the remainder of this section are taken from Bhattacharya and Waymire [15]. We begin with an upcrossing inequality for a discrete-time sub-martingale. Consider a sequence {Z_n : n ∈ ℕ} of real-valued random variables and sigma-fields F₁ ⊆ F₂ ⊆ ⋯, such that, for every n ∈ ℕ, the variable Z_n is F_n-measurable. An upcrossing of an interval (a, b) by {Z_n} is a passage to a value equal to or exceeding b from a value equal to or below a at an earlier time. Define the random variables X_n, n ∈ ℕ, by X_n = max(Z_n − a, 0). If the process {Z_n} is a sub-martingale, then so is the process {X_n}. The upcrossings of (0, b−a) by {X_n} are the upcrossings of the interval (a, b) by {Z_n}. We define the successive upcrossing times η_k, k ∈ ℕ, of {X_n} as follows:

η₁ = inf {n ≥ 1 : X_n = 0};

η₂ = inf {n ≥ η₁ : X_n ≥ b − a};

η_{2k+1} = inf {n ≥ η_{2k} : X_n = 0};

η_{2k+2} = inf {n ≥ η_{2k+1} : X_n ≥ b − a}.

Then every η_k is an {F_n}-stopping time. Fix N ∈ ℕ and put τ_k = min(η_k, N). Then every τ_k is also a stopping time, and since η_k ≥ k we have τ_{2k} = N for k > ⌊N/2⌋, the largest integer smaller than or equal to N/2. It follows that X_{τ_{2k}} = X_N for k > ⌊N/2⌋. Let U_N(a, b) be the number of upcrossings of (a, b) by the process {Z_n} at time N. That means

U_N(a, b) = sup {k ≥ 1 : η_{2k} ≤ N}   (3.149)

with the convention that the supremum over the empty set is 0. Notice that U_N(a, b) is also the number of upcrossings of the interval (0, b−a) by {X_n} in time N.



3.71. Proposition (Upcrossing inequality). Let {Z_n} be an {F_n}-submartingale. For each pair (a, b), a < b, the expected number of upcrossings of (a, b) by Z₁, …, Z_N satisfies the inequality:

E(U_N(a, b)) ≤ E( max(Z_N − a, 0) − max(Z₁ − a, 0) )/(b − a) ≤ E( max(Z_N − Z₁, 0) )/(b − a).   (3.150)

PROOF. Since X_{τ_{2k}} = X_N for k > ⌊N/2⌋, we may write (setting τ₀ = 1):

X_N − X₁ = Σ_{k=1}^{⌊N/2⌋+1} (X_{τ_{2k−1}} − X_{τ_{2k−2}}) + Σ_{k=1}^{⌊N/2⌋+1} (X_{τ_{2k}} − X_{τ_{2k−1}}).   (3.151)

Next let ν be the largest integer k with the property that η_k ≤ N, i.e. ν is the last time ≤ N of an upcrossing or a downcrossing. It readily follows that U_N(a, b) = ⌊ν/2⌋. If ν is even, then

X_{τ_{2k}} − X_{τ_{2k−1}} ≥ b − a provided 2k − 1 < ν;

X_{τ_{2k}} − X_{τ_{2k−1}} = X_N − X_N = 0 provided 2k − 1 > ν.   (3.152)

Now suppose that ν is odd. Then we have

X_{τ_{2k}} − X_{τ_{2k−1}} ≥ b − a provided 2k − 1 < ν;

X_{τ_{2k}} − X_{τ_{2k−1}} = X_{τ_{2k}} − X_{η_ν} = X_{τ_{2k}} − 0 = X_{τ_{2k}} ≥ 0 provided 2k − 1 = ν;

X_{τ_{2k}} − X_{τ_{2k−1}} = X_N − X_N = 0 provided 2k − 1 > ν.   (3.153)

From (3.152) and (3.153) it follows that

Σ_{k=1}^{⌊N/2⌋+1} (X_{τ_{2k}} − X_{τ_{2k−1}}) ≥ Σ_{k=1}^{⌊ν/2⌋} (X_{τ_{2k}} − X_{τ_{2k−1}}) ≥ ⌊ν/2⌋(b − a) = (b − a) U_N(a, b).   (3.154)

Consequently

X_N − X₁ ≥ Σ_{k=1}^{⌊N/2⌋+1} (X_{τ_{2k−1}} − X_{τ_{2k−2}}) + (b − a) U_N(a, b).   (3.155)

So far we did not make use of the fact that the process {X_n} is a sub-martingale. It then follows that the process {X_{τ_k} : k ∈ ℕ} is a {F_{τ_k}}-submartingale and hence k ↦ E(X_{τ_k}) is an increasing sequence of non-negative real numbers. So that (3.155) yields

E(X_N − X₁) ≥ Σ_{k=1}^{⌊N/2⌋+1} E(X_{τ_{2k−1}} − X_{τ_{2k−2}}) + (b − a) E(U_N(a, b))

≥ (b − a) E(U_N(a, b)).   (3.156)

The desired result in Proposition 3.71 follows from (3.156). □
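The upcrossing inequality (3.150) can be checked empirically. The sketch below (Python/NumPy assumed; the interval (a, b) = (0, 2) and the walk length are arbitrary choices) counts upcrossings of (a, b) by a simple symmetric random walk, a martingale and hence a sub-martingale, and compares the average count with E max(Z_N − a, 0)/(b − a), which dominates the right-hand side of (3.150).

```python
import numpy as np

def upcrossings(z, a, b):
    # Number of passages to a value >= b from a value <= a at an earlier time
    count, below = 0, False
    for v in z:
        if v <= a:
            below = True
        elif v >= b and below:
            count += 1
            below = False
    return count

rng = np.random.default_rng(4)
a, b, N, n_paths = 0.0, 2.0, 200, 5_000
steps = rng.choice([-1.0, 1.0], size=(n_paths, N))
walks = np.cumsum(steps, axis=1)  # simple symmetric random walk: a martingale

mean_up = np.mean([upcrossings(w, a, b) for w in walks])
bound = np.mean(np.maximum(walks[:, -1] - a, 0.0)) / (b - a)
print(mean_up, bound)
```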



3.72. Theorem (Sub-martingale convergence theorem). Let {Z_n} be a sub-martingale with the property that sup_{n∈ℕ} E(|Z_n|) < ∞. Then the sequence {Z_n} converges almost surely to an integrable random variable Z_∞. Moreover we have E(|Z_∞|) ≤ liminf_{n→∞} E(|Z_n|).


3.73. Remark. In general we do not have E(Z_∞) = lim_{n→∞} E(Z_n). In fact there exist martingales {M_n : n ∈ ℕ} such that M_n ≥ 0, such that E(M_n) = 1, n ∈ ℕ, and such that M_∞ = lim_{n→∞} M_n = 0, P-almost surely. To be specific, let {b(s) : s ≥ 0} be ν-dimensional Brownian motion starting at x ∈ ℝ^ν and let p(t, x, y) be the corresponding transition density. Fix t > 0 and y ≠ x and put

M_n = p(t/n, b(t − t/n), y) / p(t, x, y).   (3.157)

The process {M_n : n ∈ ℕ} defined in (3.157) is a P_x-martingale with respect to the sigma-fields F_n generated by b(s), 0 ≤ s ≤ t − t/n.










Proof. Let U(a, b) be the total number of upcrossings of (a, b) by the process {Z_n : n ∈ ℕ}. Then U_N(a, b) ↑ U(a, b) as N ↑ ∞. Therefore, by monotone convergence,

E(U(a, b)) = lim_{N→∞} E(U_N(a, b)) ≤ sup_{N∈ℕ} ( E(|Z_N|) + |a| )/(b − a) < ∞.   (3.158)

In particular it follows that U(a, b) < ∞ P-almost surely. Hence

P( liminf Z_n < a < b < limsup Z_n ) ≤ P(U(a, b) = ∞) = 0.   (3.159)

Since

{liminf Z_n < limsup Z_n} = ∪_{a<b, a,b∈ℚ} {liminf Z_n < a < b < limsup Z_n},

it follows from (3.159) that P(liminf Z_n < limsup Z_n) = 0. By Fatou's lemma it follows that E(|Z_∞|) = E(liminf_n |Z_n|) ≤ liminf E(|Z_n|) < ∞.

This completes the proof of Theorem 3.72. □

3.74. Corollary. A non-negative martingale {Z_n} converges almost surely to a finite limit Z_∞. Also E(Z_∞) ≤ E(Z₁).

Remark. Convergence properties for supermartingales {Z_n} are obtained from the sub-martingale results applied to {−Z_n}. Since a semi-martingale is a difference of two sub-martingales, we also have convergence results for semi-martingales.

3.75. Definition. A continuous-time process {X(t) : t ∈ ℝ} is called stochastically continuous at t₀ if for every ε > 0

lim_{t→t₀} P(|X(t) − X(t₀)| > ε) = 0.   (3.160)


3.76. Remark. Brownian motion possesses almost surely continuous sample paths and is stochastically continuous at every t ≥ 0. On the other hand a Poisson process is stochastically continuous, but its sample paths are step functions with unit jumps. In fact, for t > s, X(t) ≥ X(s) P-almost surely and, again for t > s,

P(X(t) − X(s) = n) = e^{−λ(t−s)} (λ(t−s))^n / n!,

and hence, for t > s and for ε > 0,

P(X(t) − X(s) ≥ ε) ≤ Σ_{n=1}^∞ e^{−λ(t−s)} (λ(t−s))^n / n! = 1 − e^{−λ(t−s)}.

3.77. Theorem. Let {X(t) : t ≥ 0} be a sub-martingale or a super-martingale that is stochastically continuous at each t ≥ 0. Then there exists a process {X̃(t) : t ≥ 0} with the following properties:

(i) (stochastic equivalence) {X̃(t)} is equivalent to {X(t)} in the sense that

P(X̃(t) = X(t)) = 1 for every t ≥ 0;




(ii) (sample path regularity) with probability 1 the sample paths of the process {X̃(t) : t ≥ 0} are bounded on compact intervals [a, b], a < b < ∞, are right-continuous and possess left-hand limits at each t > 0 (in other words {X̃(t) : t ≥ 0} is càdlàg).


Proof. Fix T > 0 and let Q_T denote the set of rational numbers in [0, T]. Write Q_T = ∪_{n=1}^∞ R_n, where R_n is a finite subset of [0, T] and where T ∈ R₁ ⊆ R₂ ⊆ R₃ ⊆ ⋯. By Doob's maximal inequality for sub-martingales we have

P( max_{t∈R_n} |X(t)| ≥ λ ) ≤ E(|X(T)|)/λ, n = 1, 2, …

and hence

P( sup_{t∈Q_T} |X(t)| ≥ λ ) ≤ lim_{n→∞} P( max_{t∈R_n} |X(t)| ≥ λ ) ≤ E(|X(T)|)/λ, n = 1, 2, …

For Doob's maximal inequality see e.g. Proposition 3.107 or Theorem 5.110. In particular, the paths of {X(t) : t ∈ Q_T} are bounded with probability 1. Let (c, d) be any interval in ℝ and let U^{(T)}(c, d) denote the number of upcrossings of (c, d) by the process {X(t) : t ∈ Q_T}. Then U^{(T)}(c, d) is the limit of the number U^{(n)}(c, d) of upcrossings of (c, d) by {X(t) : t ∈ R_n} as n tends to ∞. By the upcrossing inequality we have

E(U^{(n)}(c, d)) ≤ ( E(|X(T)|) + |c| )/(d − c).   (3.161)

Since U^{(n)}(c, d) increases with n it follows from (3.161) that

E(U^{(T)}(c, d)) ≤ ( E(|X(T)|) + |c| )/(d − c),   (3.162)

and hence that U^{(T)}(c, d) is almost surely finite. Taking unions over all intervals (c, d), with c, d ∈ ℚ and c < d, it follows that with probability 1 the process {X(t) : t ∈ Q_T} has only finitely many upcrossings of any interval. In particular, therefore, left- and right-hand limits must exist at each t < T, P-almost surely.

To construct a right-continuous version of {X(t)} we define {X̃(t) : t ≥ 0} as follows: X̃(t) = lim_{s↓t, s∈ℚ} X(s) for t < T. That this process {X̃(t)} is stochastically equivalent to {X(t)} follows from the stochastic continuity of the process {X(t)}. Further details are left to the reader. This completes the proof of Theorem 3.77. □


Next we prove Doob's optional sampling theorem for continuous-time sub-martingales (that are right-continuous); a similar result holds for martingales (where the inequality sign in (3.163) is replaced with an equality) and for super-martingales (where the inequality is reversed). For discrete sub-martingales the result will be taken for granted: see Theorems 5.104 and 5.114.




3.78. Theorem. Let {X(t) : t ≥ 0} be a right-continuous sub-martingale of class (DL) and let T be a stopping time. Suppose t ≥ s. Then

E[X(min(t, T)) | F_s] ≥ X(min(s, T)), P-almost surely.   (3.163)

Proof. Put s_n = 2^{−n}⌈2^n s⌉, t_n = 2^{−n}⌈2^n t⌉ and T_n = 2^{−n}⌈2^n T⌉. If A belongs to F_s, then A also belongs to F_{s_n} for all n ∈ ℕ. From Doob's optional sampling theorem for discrete-time sub-martingales we infer, upon using the (DL)-property,

E[X(min(t_n, T_n)) 1_A] ≥ E[X(min(s_n, T_n)) 1_A].   (3.164)

Upon letting n tend to ∞ and using the right-continuity of the process t ↦ X(t), t ≥ 0, we infer

E[X(min(t, T)) 1_A] ≥ E[X(min(s, T)) 1_A],   (3.165)

where A ∈ F_s is arbitrary. Consequently the result in (3.163) follows from (3.165), and so the proof of Theorem 3.78 is complete. □




6. Stochastic integrals, Itô's formula

The assumptions are as in Section 4. The process {b(t) : t ≥ 0} is assumed to be one-dimensional Brownian motion and hence the process t ↦ b(t)² − t is a martingale: see Proposition 3.23. The following proposition contains the basic ingredients of (the definition of) a stochastic integral.

3.79. Proposition. Let s₁, …, s_n and t₁, …, t_n be non-negative numbers for which s_{j−1} < t_{j−1} ≤ s_j < t_j, 2 ≤ j ≤ n. Let f₁, …, f_n be bounded random variables which are measurable with respect to F_{s₁}, …, F_{s_n} respectively. Put

Y(s, ω) = Σ_{j=1}^n f_j(ω) 1_{(s_j, t_j]}(s)

and write

∫_0^t Y(s, ·) db(s) = Σ_{j=1}^n f_j ( b(min(t, t_j)) − b(min(t, s_j)) ).

The following assertions hold true:

(a) The process { ∫_0^t Y(s) db(s) : t ≥ 0 } is a martingale and the process { ( ∫_0^t Y(s) db(s) )² : t ≥ 0 } is a submartingale.

(b) The process { ( ∫_0^t Y(s) db(s) )² − ∫_0^t Y(s)² ds : t ≥ 0 } is a martingale.

(c) (Itô isometry) The following equality is valid:

E[ ( ∫_0^t Y(s) db(s) )² ] = E[ ∫_0^t Y(s)² ds ].   (3.166)

The equality in assertion (c) is called the Itô isometry. It is an extremely important equality: the entire Itô calculus is justified by the use of the equality in (3.166).


Proof. The more or less straightforward calculations are left as an exercise to the reader. We insert some ways to simplify the computations. Let $F$ and $G$ be predictable processes of the form $F(s) = f\,\mathbf{1}_{(u,\infty)}(s)$ and $G(s) = g\,\mathbf{1}_{(v,\infty)}(s)$, where $f$ is measurable for the $\sigma$-field $\mathcal{F}_u$ and $g$ for $\mathcal{F}_v$. Put
\[
I_t(F) = \int_0^t F(s)\,db(s) := f\left( b(t) - b(\min(u,t)) \right)
\]
and similarly write
\[
I_t(G) = \int_0^t G(s)\,db(s) = g\left( b(t) - b(\min(v,t)) \right).
\]
Without loss of generality we assume $v \geq u$ (otherwise we interchange the roles of $F$ and $G$). We begin with a proof of (a). Upon employing linearity it suffices to show that the process $t \mapsto I_t(F)$ is a martingale. (Also notice that $\int_0^t f\,\mathbf{1}_{(u,v]}(s)\,db(s) = I_t(F) - I_t(F_1)$, where $F_1(s) = f\,\mathbf{1}_{(v,\infty)}(s)$.) Fix $t > s \geq 0$ and consider
\begin{align*}
&\mathbb{E}\left( I_t(F) \mid \mathcal{F}_s \right) - I_s(F) = \mathbb{E}\left( I_t(F) - I_s(F) \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left( f\left( b(t) - b(\min(u,t)) - b(s) + b(\min(u,s)) \right) \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left( \mathbb{E}\left( f\left( b(t) - b(\min(u,t)) - b(s) + b(\min(u,s)) \right) \mid \mathcal{F}_{\min(\max(u,s),t)} \right) \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left( f\,\mathbb{E}\left( b(t) - b(\min(u,t)) - b(s) + b(\min(u,s)) \mid \mathcal{F}_{\min(\max(u,s),t)} \right) \mid \mathcal{F}_s \right) \\
&\qquad \text{(Brownian motion is a martingale)} \\
&= \mathbb{E}\left( f\left( b(\min(\max(u,s),t)) - b(\min(u,t)) - b(s) + b(\min(u,s)) \right) \mid \mathcal{F}_s \right) = 0,
\end{align*}
proving that the process $t \mapsto I_t(F)$ is a martingale indeed. Next we shall prove that the process $t \mapsto I_t(F)I_t(G) - \int_0^t F(\tau)G(\tau)\,d\tau$ is a martingale. Using bilinearity in $F$ and $G$ yields a proof of (b) and hence also of (c). Again we fix $t > s$ and consider
\begin{align*}
&\mathbb{E}\left( I_t(F)I_t(G) - \int_0^t F(\tau)G(\tau)\,d\tau \mid \mathcal{F}_s \right) - \left( I_s(F)I_s(G) - \int_0^s F(\tau)G(\tau)\,d\tau \right) \\
&= \mathbb{E}\left( I_t(F)I_t(G) - \int_0^t F(\tau)G(\tau)\,d\tau - \left( I_s(F)I_s(G) - \int_0^s F(\tau)G(\tau)\,d\tau \right) \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left( \left( I_t(F) - I_s(F) \right)\left( I_t(G) - I_s(G) \right) - \int_s^t F(\tau)G(\tau)\,d\tau \mid \mathcal{F}_s \right) \\
&\quad + \mathbb{E}\left( I_s(F)\left( I_t(G) - I_s(G) \right) + \left( I_t(F) - I_s(F) \right) I_s(G) \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left( \left( I_t(F) - I_s(F) \right)\left( I_t(G) - I_s(G) \right) - \int_s^t F(\tau)G(\tau)\,d\tau \mid \mathcal{F}_s \right) \\
&\quad + I_s(F)\,\mathbb{E}\left( I_t(G) - I_s(G) \mid \mathcal{F}_s \right) + \mathbb{E}\left( I_t(F) - I_s(F) \mid \mathcal{F}_s \right) I_s(G)
\end{align*}
(use the martingale property of $I_t(F)$ and $I_t(G)$)
\begin{align*}
&= \mathbb{E}\left( \left( I_t(F) - I_s(F) \right)\left( I_t(G) - I_s(G) \right) - \int_s^t F(\tau)G(\tau)\,d\tau \mid \mathcal{F}_s \right) \\
&= \mathbb{E}\left[ fg\left( b(t) - b(\min(\max(u,s),t)) \right)\left( b(t) - b(\min(\max(v,s),t)) \right) - fg\left( t - \min(\max(u,v,s),t) \right) \mid \mathcal{F}_s \right]
\end{align*}
(use $v \geq u$ and put $u_{s,t} = \min(\max(u,s),t)$, $v_{s,t} = \min(\max(v,s),t)$)
\begin{align*}
&= \mathbb{E}\left[ fg\left( b(t) - b(v_{s,t}) \right)^2 + fg\left( b(v_{s,t}) - b(u_{s,t}) \right)\left( b(t) - b(v_{s,t}) \right) - fg\left( t - v_{s,t} \right) \mid \mathcal{F}_s \right] \\
&= \mathbb{E}\left[ fg\,\mathbb{E}\left( \left( b(t) - b(v_{s,t}) \right)^2 - \left( t - v_{s,t} \right) \mid \mathcal{F}_{v_{s,t}} \right) \mid \mathcal{F}_s \right] \\
&\quad + \mathbb{E}\left[ fg\left( b(v_{s,t}) - b(u_{s,t}) \right)\,\mathbb{E}\left( b(t) - b(v_{s,t}) \mid \mathcal{F}_{v_{s,t}} \right) \mid \mathcal{F}_s \right]
\end{align*}
(the processes $\{b(s)\}$ and $\{b(s)^2 - s\}$ are martingales)
\[
= \mathbb{E}\left( fg \cdot 0 \mid \mathcal{F}_s \right) + \mathbb{E}\left( fg\left( b(v_{s,t}) - b(u_{s,t}) \right) \cdot 0 \mid \mathcal{F}_s \right) = 0.
\]
The latter yields a proof of (b) (via bilinearity). Altogether this finishes the proof of Proposition 3.79. □
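The isometry (3.166) and the martingale property in assertion (a) can be checked numerically for a one-term simple process $Y(s) = f\,\mathbf{1}_{(u,v]}(s)$ with $f$ bounded and $\mathcal{F}_u$-measurable. A minimal sketch in Python/NumPy; the particular choice $f = \operatorname{sign}(b(u))$ and the sample sizes are illustrative assumptions, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths = 200_000
u, v = 0.5, 1.0

# b(u) ~ N(0, u); the increment b(v) - b(u) ~ N(0, v - u) is independent of F_u.
b_u = rng.normal(0.0, np.sqrt(u), n_paths)
increment = rng.normal(0.0, np.sqrt(v - u), n_paths)

# f is bounded and F_u-measurable (an illustrative choice).
f = np.sign(b_u)

# Stochastic integral of Y(s) = f * 1_{(u,v]}(s):  int Y db = f (b(v) - b(u)).
stoch_int = f * increment

lhs = np.mean(stoch_int**2)       # E[(int Y db)^2]
rhs = np.mean(f**2) * (v - u)     # E[int Y^2 ds]
print(lhs, rhs, np.mean(stoch_int))
```

Here `lhs` and `rhs` agree up to Monte Carlo error, and the sample mean of the integral is close to zero, in line with the martingale property.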

3.80. Definition. A process of the form
\[
F(s,\omega) = \sum_{j=1}^{n} f_j(\omega)\,\mathbf{1}_{(s_j,t_j]}(s),
\]
where $0 \leq s_{j-1} < t_{j-1} \leq s_j < t_j$, $2 \leq j \leq n$, and where the functions $f_1,\ldots,f_n$ are bounded and measurable with respect to $\mathcal{F}_{s_1},\ldots,\mathcal{F}_{s_n}$ respectively, is called a simple predictable process.





3.81. Definition. Again let $b$ be Brownian motion with drift zero and let $\Pi_2(b)$ be the vector space of all predictable processes $F$ with the property that
\[
\|F\|_b^2 := \mathbb{E}\left[ \int_0^\infty |F(s)|^2\,ds \right] < \infty.
\]
Let $Q$ be the $\sigma$-additive measure, defined on the predictable $\sigma$-field $\Pi$, determined by
\[
Q\left( A \times (s,t] \right) = \mathbb{E}\left[ \mathbf{1}_A \right](t-s) = \mathbb{P}(A)(t-s), \quad A \in \mathcal{F}_s. \tag{3.167}
\]
The measure $Q$ is called the Doléans measure for Brownian motion.

Then it follows that $\Pi_2(b) = L^2([0,\infty) \times \Omega, \Pi, Q)$. Moreover we have
\[
\|F\|_b^2 = \int |F|^2\,dQ, \quad F \in \Pi_2(b).
\]
It also follows that, for given $F \in \Pi_2(b)$, there exists a sequence of simple predictable processes $(F_n : n \in \mathbb{N})$ such that $\lim_{n\to\infty} \|F_n - F\|_b = 0$.

Hence in view of Proposition 3.79 it is obvious how to define $\int_0^t F(s)\,db(s)$, $t \geq 0$, for $F \in \Pi_2(b)$. In fact
\[
\int_0^t F(s)\,db(s) = L^2\text{-}\lim_{n\to\infty} \int_0^t F_n(s)\,db(s),
\]
where the sequence $(F_n : n \in \mathbb{N})$ verifies $\lim_{n\to\infty} \|F_n - F\|_b = 0$ and where each $F_n$ belongs to $\Pi_2(b)$. Let $\Pi_3(b)$ be the vector space of all predictable processes $F$ for which the integrals $\int_0^t |F(s)|^2\,ds$ are finite $\mathbb{P}$-almost surely for all $t > 0$. In order to extend the definition of the stochastic integral to processes $F \in \Pi_3(b)$ we proceed as follows. Define the stopping times $T_n$, $n \in \mathbb{N}$, in the following fashion:
\[
T_n = \inf\left\{ t > 0 : \int_0^t |F(s)|^2\,ds \geq n \right\}. \tag{3.168}
\]
We also write $F_n(s) = F(s)\,\mathbf{1}_{\{T_n \geq s\}}$ and we observe that $F_n$ is a predictable process with $\int |F_n(s)|^2\,ds \leq n$. Moreover it follows that for $n \geq m$ the expression
\[
\int_0^t F_n(s)\,db(s) - \int_0^t F_m(s)\,db(s) = \int_0^t F(s)\,\mathbf{1}_{(T_m,\min(T_n,t)]}(s)\,db(s) \tag{3.169}
\]
vanishes almost everywhere on the event $\{T_m \geq t\}$. So it makes sense to write
\[
\int_0^t F(s)\,db(s) = \int_0^t F_m(s)\,db(s) \quad \text{on } \{T_m \geq t\}.
\]
Since $\lim_{n\to\infty} T_n = \infty$, $\mathbb{P}$-almost surely, the quantity $\int_0^t F(s)\,db(s)$ is unambiguously defined. Hence the integral $\int_0^t F(s)\,db(s)$ is well defined for processes $F$ belonging to $\Pi_3(b)$.

3.82. Corollary. Let $b$ be Brownian motion and let $F$ and $G$ be processes in $\Pi_3(b)$. The following processes are local martingales:
\[
\left\{ \int_0^t F(s)\,db(s) : t \geq 0 \right\}, \quad \left\{ \int_0^t G(s)\,db(s) : t \geq 0 \right\}; \tag{3.170}
\]
\[
\left\{ \left( \int_0^t F(s)\,db(s) \right)^2 - \int_0^t |F(s)|^2\,ds : t \geq 0 \right\}; \tag{3.171}
\]
\[
\left\{ \int_0^t F(s)\,db(s) \int_0^t G(s)\,db(s) - \int_0^t F(s)G(s)\,ds : t \geq 0 \right\}. \tag{3.172}
\]
Put $X(t) = \int_0^t F(s)\,db(s)$ and $Y(t) = \int_0^t G(s)\,db(s)$. The following identity is valid:
\[
X(t)Y(t) - \int_0^t F(s)G(s)\,ds = \int_0^t F(s)Y(s)\,db(s) + \int_0^t X(s)G(s)\,db(s). \tag{3.173}
\]

Proof. The assertions (3.170), (3.171) and (3.172) follow from Proposition 3.79 together with taking appropriate limits. For the proof of (3.173) we first take $F = G = 1$. Then (3.173) reduces to showing that
\[
b(t)^2 - 2\int_0^t b(s)\,db(s) - t = 0. \tag{3.174}
\]
Notice that (3.174) is equivalent to $2\int_0^t b(s)\,db(s) = b(t)^2 - t$, $t \geq 0$. For the proof of (3.174) we use Lemma 3.69 to conclude:
\begin{align*}
&b(t)^2 - 2\int_0^t b(s)\,db(s) - t \\
&= \lim_{n\to\infty} \left( b(t)^2 - 2\sum_{k=0}^{2^n-1} b\left(k2^{-n}t\right)\left( b\left((k+1)2^{-n}t\right) - b\left(k2^{-n}t\right) \right) - t \right) \\
&= \lim_{n\to\infty} \left( \sum_{k=0}^{2^n-1} \left( b\left((k+1)2^{-n}t\right)^2 - b\left(k2^{-n}t\right)^2 \right) - 2\sum_{k=0}^{2^n-1} b\left(k2^{-n}t\right)\left( b\left((k+1)2^{-n}t\right) - b\left(k2^{-n}t\right) \right) - t \right) \\
&= \lim_{n\to\infty} \left( \sum_{k=0}^{2^n-1} \left( b\left((k+1)2^{-n}t\right) - b\left(k2^{-n}t\right) \right)^2 - t \right) = 0.
\end{align*}
For the proof of (3.173) we then take $F(s) = f\,\mathbf{1}_{(s_1,\infty)}(s)$ and $G(s) = g\,\mathbf{1}_{(s_2,\infty)}(s)$, where $f$ is bounded and measurable with respect to $\mathcal{F}_{s_1}$ and $g$ is measurable with respect to $\mathcal{F}_{s_2}$. Formula (3.174) will then yield the desired result. Then we pass over to linear combinations and finally to limits. This completes the proof of Corollary 3.82. □
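The identity (3.174), in the form $2\int_0^t b(s)\,db(s) = b(t)^2 - t$, can be illustrated by discretizing the stochastic integral with left-endpoint (Itô) Riemann sums, mirroring the dyadic approximation used in the proof. A minimal sketch in Python/NumPy; step count and path count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 2000, 5000
dt = t / n_steps

# Brownian increments and the path values b(k*dt), k = 1..n_steps.
db = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
b = np.cumsum(db, axis=1)

# Left endpoints b((k-1)*dt): essential for the non-anticipating (Ito) integral.
b_left = np.hstack([np.zeros((n_paths, 1)), b[:, :-1]])

ito_sum = np.sum(b_left * db, axis=1)         # approximates int_0^t b db
residual = 2.0 * ito_sum - (b[:, -1]**2 - t)  # should be small, per (3.174)
print(np.mean(np.abs(residual)))
```

The residual equals $t$ minus the discrete quadratic variation of the path, so it shrinks as the mesh is refined, exactly as in the dyadic limit above.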


3.83. Proposition. Stochastic integrals with integrands in $\Pi_3(b)$ are continuous $\mathbb{P}$-almost surely.

Proof. It suffices to prove the result for integrands in $\Pi_2(b)$. Since Brownian motion is almost surely continuous, it follows that stochastic integrals of simple predictable processes are continuous. Let $F$ be in $\Pi_2(b)$ and choose a sequence $(F_n)_{n\in\mathbb{N}}$ of simple predictable processes with the property that
\[
\left( \mathbb{E}\left[ \int_0^{t_0} \left| F_{n+\ell}(s) - F_{n+\ell-1}(s) \right|^2 ds \right] \right)^{1/2} \leq 2^{-n-\ell-2},
\quad\text{so that}\quad
\left( \mathbb{E}\left[ \int_0^{t_0} \left| F(s) - F_n(s) \right|^2 ds \right] \right)^{1/2} \leq \sum_{\ell=1}^{\infty} 2^{-n-\ell-2} \leq 2^{-n-1}.
\]
From Proposition 3.63 it follows that, for $k \in \mathbb{N}$,
\begin{align*}
\mathbb{E}\left[ \sup_{0 \leq t \leq t_0} \left| \int_0^t \left( F_{n+k}(s) - F_n(s) \right) db(s) \right| \right]
&\leq \sum_{\ell=1}^{\infty} \mathbb{E}\left[ \sup_{0 \leq t \leq t_0} \left| \int_0^t \left( F_{n+\ell}(s) - F_{n+\ell-1}(s) \right) db(s) \right| \right] \\
&\leq \sum_{\ell=1}^{\infty} 2\left( \mathbb{E}\left[ \int_0^{t_0} \left| F_{n+\ell}(s) - F_{n+\ell-1}(s) \right|^2 ds \right] \right)^{1/2} \leq 2^{-n}. \tag{3.175}
\end{align*}
From (3.175) the sample path continuity of stochastic integrals immediately follows. This completes the proof of Proposition 3.83. □





3.84. Remark. The theory in this section can be extended to (continuous) martingales instead of Brownian motion. To be precise, let $t \mapsto M(t)$ be a continuous martingale with quadratic variation process $t \mapsto \langle M, M\rangle(t)$. Then the process $t \mapsto M(t)^2 - \langle M, M\rangle(t)$ is a martingale, and the space $\Pi_2(b)$ should be replaced with $\Pi_2(M)$, the space of all predictable processes $t \mapsto F(t)$ with the property that
\[
\|F\|_M^2 = \mathbb{E}\left[ \int_0^\infty |F(s)|^2\,d\langle M, M\rangle(s) \right] < \infty. \tag{3.176}
\]
The corresponding Doléans measure $Q_M$ is given by
\[
Q_M\left( A \times (s,t] \right) = \mathbb{E}\left[ \mathbf{1}_A \left( \langle M, M\rangle(t) - \langle M, M\rangle(s) \right) \right], \quad A \in \mathcal{F}_s,\ s < t. \tag{3.177}
\]
It follows that
\[
\Pi_2(M) = L^2\left( \Omega \times [0,\infty), \Pi, Q_M \right).
\]
The space $\Pi_3(M)$ consists of those predictable processes $t \mapsto F(t)$ which have the property that the integrals $\int_0^t |F(s)|^2\,d\langle M, M\rangle(s)$ are finite $\mathbb{P}$-almost surely for all $t > 0$. To define the stochastic integral for processes $F \in \Pi_3(M)$ we proceed as follows. Define the stopping times $T_n$, $n \in \mathbb{N}$, in the following fashion:
\[
T_n = \inf\left\{ t > 0 : \int_0^t |F(s)|^2\,d\langle M, M\rangle(s) \geq n \right\}. \tag{3.178}
\]
As in the case of Brownian motion these stopping times can be used to define stochastic integrals of the form $\int_0^t F(s)\,dM(s)$, $F \in \Pi_3(M)$. These integrals are then local martingales.


Next we extend the equality in (3.173) to the multi-dimensional situation.

3.85. Proposition. Let $s \mapsto \sigma(s) = \left( \sigma_{jk}(s) \right)_{1 \leq j,k \leq \nu}$ be a matrix with predictable entries and with the property that the expression
\[
\sum_{j,k=1}^{\nu} \int_0^t \mathbb{E}\left[ |\sigma_{jk}(s)|^2 \right] ds \tag{3.179}
\]
is finite for every $t > 0$. Put $a_{ij}(s) = \sum_{k=1}^{\nu} \sigma_{ik}(s)\sigma_{jk}(s)$, $1 \leq i,j \leq \nu$. Furthermore let $\{b(s) = (b_1(s),\ldots,b_\nu(s)) : s \geq 0\}$ be $\nu$-dimensional Brownian motion. Put $M_j(t) = \sum_{k=1}^{\nu} \int_0^t \sigma_{jk}(s)\,db_k(s)$, $1 \leq j \leq \nu$. Then the following identity is valid:
\[
M_i(t)M_j(t) = \sum_{k=1}^{\nu} \int_0^t M_i(s)\sigma_{jk}(s)\,db_k(s) + \sum_{k=1}^{\nu} \int_0^t \sigma_{ik}(s)M_j(s)\,db_k(s) + \int_0^t a_{ij}(s)\,ds. \tag{3.180}
\]

Proof. First we suppose $\nu = 2$, $M_1(t) = b_1(t)$ and $M_2(t) = b_2(t)$. Then (3.180) reads as follows:
\[
b_1(t)b_2(t) = \int_0^t b_1(s)\,db_2(s) + \int_0^t b_2(s)\,db_1(s). \tag{3.181}
\]
In order to prove (3.181) we write
\begin{align*}
&b_1(t)b_2(t) - \int_0^t b_1(s)\,db_2(s) - \int_0^t b_2(s)\,db_1(s) \\
&= \lim_{n\to\infty} \sum_{k=0}^{2^n-1} \Big\{ b_1\left((k+1)2^{-n}t\right) b_2\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) b_2\left(k2^{-n}t\right) \\
&\qquad\qquad - b_1\left(k2^{-n}t\right)\left( b_2\left((k+1)2^{-n}t\right) - b_2\left(k2^{-n}t\right) \right) - b_2\left(k2^{-n}t\right)\left( b_1\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) \right) \Big\} \\
&= \lim_{n\to\infty} \sum_{k=0}^{2^n-1} \left( b_1\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) \right)\left( b_2\left((k+1)2^{-n}t\right) - b_2\left(k2^{-n}t\right) \right). \tag{3.182}
\end{align*}
The limit in (3.182) vanishes, because by independence and martingale properties of the processes $b_1$ and $b_2$ we infer
\begin{align*}
&\mathbb{E}\left[ \left( \sum_{k=0}^{2^n-1} \left( b_1\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) \right)\left( b_2\left((k+1)2^{-n}t\right) - b_2\left(k2^{-n}t\right) \right) \right)^2 \right] \\
&= \mathbb{E}\left[ \sum_{k=0}^{2^n-1} \left( b_1\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) \right)^2 \left( b_2\left((k+1)2^{-n}t\right) - b_2\left(k2^{-n}t\right) \right)^2 \right] \\
&= \sum_{k=0}^{2^n-1} \mathbb{E}\left[ \left( b_1\left((k+1)2^{-n}t\right) - b_1\left(k2^{-n}t\right) \right)^2 \right] \mathbb{E}\left[ \left( b_2\left((k+1)2^{-n}t\right) - b_2\left(k2^{-n}t\right) \right)^2 \right] \\
&= \sum_{k=0}^{2^n-1} \left( 2^{-n}t \right)^2 = 2^{-n}t^2. \tag{3.183}
\end{align*}
From the Borel–Cantelli lemma it then easily follows that the limit in (3.182) vanishes, and hence that equality (3.181) is true. The validity of (3.180) is then checked for the special case $\sigma_{jk}(s) = f_{jk}\,\mathbf{1}_{(s_{jk},\infty)}(s)$, where $f_{jk}$ is measurable with respect to $\mathcal{F}_{s_{jk}}$. The general statement follows via bilinearity and a limiting procedure together with equality (3.173) in Corollary 3.82. The proof of Proposition 3.85 is now complete. □
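The special case (3.181), the product rule $b_1(t)b_2(t) = \int_0^t b_1\,db_2 + \int_0^t b_2\,db_1$ with no bracket term (the covariation of independent Brownian motions vanishes, as (3.183) shows), can be checked path by path with left-endpoint sums. A sketch under illustrative discretization parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
t, n_steps, n_paths = 1.0, 2000, 5000
dt = t / n_steps

# Two independent one-dimensional Brownian motions.
db1 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
db2 = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
b1, b2 = np.cumsum(db1, axis=1), np.cumsum(db2, axis=1)

# Left endpoints for the non-anticipating (Ito) sums.
b1_left = np.hstack([np.zeros((n_paths, 1)), b1[:, :-1]])
b2_left = np.hstack([np.zeros((n_paths, 1)), b2[:, :-1]])

lhs = b1[:, -1] * b2[:, -1]
rhs = np.sum(b1_left * db2, axis=1) + np.sum(b2_left * db1, axis=1)
print(np.mean(np.abs(lhs - rhs)))  # discrete cross-variation, O(sqrt(dt))
```

The per-path discrepancy is exactly the discrete sum in (3.182), whose size is of order $\sqrt{t\,dt}$ by the second-moment computation (3.183).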


Next let $M(t) = (M_1(t),\ldots,M_\nu(t))$ be a $\nu$-dimensional martingale as in Proposition 3.85 and let $A(t) = (A_1(t),\ldots,A_\nu(t))$ be an adapted $\nu$-dimensional process that $\mathbb{P}$-almost surely is of bounded variation on $[0,t]$ for every $t > 0$. This means that
\[
\sup_{n\in\mathbb{N}}\ \sup_{0 \leq s_0 < s_1 < \cdots < s_n \leq t} \sum_{j=1}^{n} \left| A(s_j) - A(s_{j-1}) \right| \ \text{is finite $\mathbb{P}$-almost surely for all } t > 0.
\]
It follows that the random set function $\mu_A : (a,b] \mapsto A(b) - A(a)$ extends to an $\mathbb{R}^\nu$-valued measure on $[0,t]$ for every $t > 0$. Stieltjes integrals of the form $\int_0^t F(s)\,dA(s)$ may be interpreted as $\int_0^t F(s)\,dA(s) = \int_0^t F(s)\,d\mu_A(s)$. The process $A$ may have jumps. This is not the case for the process $M$: the latter follows from Proposition 3.83. The process $X := A + M$ is a $\nu$-dimensional semi-martingale with the property that $\mathbb{E}\left( |M(t)|^2 \right) < \infty$, $t > 0$. Put
\[
J_X(t) = \sum_{s \leq t} \left( X(s) - X(s-) \right) = \sum_{s \leq t} \left( M(s) - M(s-) \right) + \sum_{s \leq t} \left( A(s) - A(s-) \right) = J_A(t).
\]
The definition of $J_A(t)$ does not pose too much of a problem. In fact, for $\mathbb{P}$-almost all $\omega$ the sum $\sum_{s \leq t} |A(s,\omega) - A(s-,\omega)| < \infty$. The process $\{X(t) - J_X(t) : t \geq 0\}$ is $\mathbb{P}$-almost surely continuous.





The following result is the fundamental theorem of stochastic calculus.

3.86. Theorem (Itô's formula). Let $X = (X_1,\ldots,X_\nu) = A + M$ be a $\nu$-dimensional local semi-martingale as described above, and let $f : \mathbb{R}^\nu \to \mathbb{R}$ be a twice continuously differentiable function. Put $a_{ij}(t) = \sum_{k=1}^{\nu} \sigma_{ik}(t)\sigma_{jk}(t)$, $1 \leq i,j \leq \nu$. Then, $\mathbb{P}$-almost surely,
\begin{align*}
f(X(t)) = f(X(0)) &+ \sum_{s \leq t} \left( f(X(s)) - f(X(s-)) - \nabla f(X(s-)) \cdot \left( X(s) - X(s-) \right) \right) \\
&+ \int_0^t \nabla f(X(s-)) \cdot dX(s) + \frac{1}{2} \sum_{i,j=1}^{\nu} \int_0^t D_i D_j f(X(s))\,a_{ij}(s)\,ds. \tag{3.184}
\end{align*}

Before we prove Theorem 3.86 we want to make some comments and give a reformulation of Itô's formula. Moreover, we shall not prove Itô's formula in its full generality: we content ourselves with a proof for $A = 0$.


Remark. The integral $\int_0^t \nabla f(X(s-)) \cdot dX(s)$ has the interpretation:
\begin{align*}
\int_0^t \nabla f(X(s-)) \cdot dX(s)
&= \sum_{i=1}^{\nu} \left( \int_0^t D_i f(X(s-))\,dM_i(s) + \int_0^t D_i f(X(s-))\,dA_i(s) \right) \\
&= \sum_{i=1}^{\nu} \left( \sum_{k=1}^{\nu} \int_0^t D_i f(X(s-))\,\sigma_{ik}(s)\,db_k(s) + \int_0^t D_i f(X(s-))\,dA_i(s) \right). \tag{3.185}
\end{align*}
Here $X = M + A$ is the decomposition of the semi-martingale into a martingale part $M$ and a process $A$ which is locally of bounded variation.


For $\nu$-dimensional Brownian motion we have the following corollary.

3.87. Corollary. Let $b(t) = (b_1(t),\ldots,b_\nu(t))$ be $\nu$-dimensional Brownian motion. Let $f : \mathbb{R}^\nu \to \mathbb{R}$ be a twice continuously differentiable function. Then, $\mathbb{P}$-almost surely,
\[
f(b(t)) = f(b(0)) + \int_0^t \nabla f(b(s)) \cdot db(s) + \frac{1}{2} \int_0^t \Delta f(b(s))\,ds. \tag{3.186}
\]
In fact it suffices to suppose that the functions $D_1 f,\ldots,D_\nu f$ and $D_1^2 f,\ldots,D_\nu^2 f$ are continuous. Next we reformulate Itô's formula.

3.88. Theorem. Let $X = (X_1,\ldots,X_\nu)$ be a $\nu$-dimensional right-continuous semi-martingale as in Theorem 3.86 and let $f : \mathbb{R}^\nu \to \mathbb{R}$ be a twice continuously differentiable function. Then, $\mathbb{P}$-almost surely,
\begin{align*}
f(X(t)) = f(X(0)) &+ \int_0^t \nabla f(X(s-)) \cdot dX(s) \tag{3.187} \\
&+ \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s),
\end{align*}
where
\[
[X_i,X_j](t) = \int_0^t a_{ij}(s)\,ds + \sum_{s \leq t} \left( X_i(s) - X_i(s-) \right)\left( X_j(s) - X_j(s-) \right). \tag{3.188}
\]

Remark. In the proof below we employ the following notation. Let $M_i$ be a martingale of the form
\[
M_i(t) := \sum_{k=1}^{\nu} \int_0^t \sigma_{ik}(s)\,db_k(s).
\]
Then the quadratic covariation process $\langle M_i, M_j\rangle(t)$ satisfies
\[
\langle M_i, M_j\rangle(t) = \sum_{k=1}^{\nu} \int_0^t \sigma_{ik}(s)\sigma_{jk}(s)\,ds.
\]


Proof of Theorems 3.88 and 3.86. Since, for $a$ and $b$ in $\mathbb{R}^\nu$,
\[
f(b) - f(a) = \nabla f(a) \cdot (b-a) + \sum_{i,j=1}^{\nu} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)a + \sigma b \right) d\sigma\,(b_i - a_i)(b_j - a_j), \tag{3.189}
\]
and since
\[
[X_i,X_j](t) = \int_0^t a_{ij}(s)\,ds + \sum_{s \leq t} \left( X_i(s) - X_i(s-) \right)\left( X_j(s) - X_j(s-) \right),
\]
it follows that
\begin{align*}
&\sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s) \\
&= \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,a_{ij}(s)\,ds \\
&\quad + \sum_{i,j=1}^{\nu} \sum_{s \leq t} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,\left( X_i(s) - X_i(s-) \right)\left( X_j(s) - X_j(s-) \right) \\
&= \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,a_{ij}(s)\,ds \\
&\quad + \sum_{s \leq t} \left( f(X(s)) - f(X(s-)) - \nabla f(X(s-)) \cdot \left( X(s) - X(s-) \right) \right). \tag{3.190}
\end{align*}
So the formulas in Theorem 3.88 and Theorem 3.86 are equivalent. Also notice that, since $\int_0^t a_{ij}(s)\,ds$ is a continuous process of finite variation (locally) which does not charge the (countably many) jump times of $X$, we have
\begin{align*}
\int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,a_{ij}(s)\,ds
&= \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( X(s-) \right) d\sigma\,a_{ij}(s)\,ds \\
&= \frac{1}{2} \int_0^t D_i D_j f(X(s-))\,a_{ij}(s)\,ds.
\end{align*}
Hence it suffices to prove equality (3.187) in Theorem 3.88. Assume $X(0) = M(0)$ and hence $A(0) = 0$. Upon stopping we may and do assume that in $X = M + A$, $|X(t-)| \leq L$ and $\operatorname{var} A(t-) \leq L$. This can be achieved by replacing $X(t)$ with $X(\min(t,\tau))$, where $\tau$ is the stopping time defined by
\[
\tau = \inf\left\{ s > 0 : \max\left( |M(s)|, \operatorname{var} A(s) \right) \geq L \right\}.
\]
Here $\operatorname{var} A(s)$ is defined by
\[
\operatorname{var} A(s) = \sup\left\{ \sum_{j=1}^{n} \left| A(s_j) - A(s_{j-1}) \right| : 0 \leq s_0 < s_1 < \cdots < s_n = s \right\}.
\]


Next we define, for every $n \in \mathbb{N}$, the sequence of stopping times $\{T_{n,k} : k \in \mathbb{N}\}$ as follows:
\[
T_{n,0} = 0; \qquad
T_{n,k+1} = \inf\left\{ s > T_{n,k} : \max\left( s - T_{n,k},\ \left| X(s) - X(T_{n,k}) \right| \right) \geq \frac{1}{n} \right\}.
\]
Since
\[
\max\left( T_{n,k+1} - T_{n,k},\ \left| X(T_{n,k+1}) - X(T_{n,k}) \right| \right) \geq \frac{1}{n},
\]
it follows that $\lim_{k\to\infty} T_{n,k} = \infty$, $\mathbb{P}$-almost surely. Moreover, since
\[
\max\left( T_{n,k+1} - T_{n,k},\ \left| X(T_{n,k+1}-) - X(T_{n,k}) \right| \right) \leq \frac{1}{n}, \tag{3.191}
\]
we have $T_{n,k+1} - T_{n,k} \leq \frac{1}{n}$. Next we write:
\begin{align*}
&f(X(t)) - f(X(0)) \\
&= \sum_{k=0}^{\infty} \left\{ f\left( X(T_{n,k+1} \wedge t\,-) \right) - f\left( X(T_{n,k} \wedge t) \right) + f\left( X(T_{n,k+1} \wedge t) \right) - f\left( X(T_{n,k+1} \wedge t\,-) \right) \right\} \\
&= \sum_{k=0}^{\infty} \left\{ \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \nabla f\left( X(T_{n,k} \wedge t) \right) \cdot dX(s) \right. \\
&\qquad + \sum_{i,j=1}^{\nu} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\qquad\qquad \times \left( X_i(T_{n,k+1} \wedge t\,-) - X_i(T_{n,k} \wedge t) \right)\left( X_j(T_{n,k+1} \wedge t\,-) - X_j(T_{n,k} \wedge t) \right) \\
&\qquad \left. + f\left( X(T_{n,k+1} \wedge t) \right) - f\left( X(T_{n,k+1} \wedge t\,-) \right) \right\}. \tag{3.192}
\end{align*}
On the other hand we also have:
\begin{align*}
&\int_0^t \nabla f(X(s-)) \cdot dX(s) + \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s) \\
&= \sum_{k=0}^{\infty} \left\{ \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \nabla f(X(s-)) \cdot dX(s) \right. \\
&\qquad + \sum_{i,j=1}^{\nu} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s) \\
&\qquad + \nabla f\left( X(T_{n,k+1} \wedge t\,-) \right) \cdot \left( X(T_{n,k+1} \wedge t) - X(T_{n,k+1} \wedge t\,-) \right) \\
&\qquad + \sum_{i,j=1}^{\nu} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k+1} \wedge t\,-) + \sigma X(T_{n,k+1} \wedge t) \right) d\sigma \\
&\qquad\qquad \left. \times \left( X_i(T_{n,k+1} \wedge t) - X_i(T_{n,k+1} \wedge t\,-) \right)\left( X_j(T_{n,k+1} \wedge t) - X_j(T_{n,k+1} \wedge t\,-) \right) \right\} \\
&= \sum_{k=0}^{\infty} \left\{ \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \nabla f(X(s-)) \cdot dX(s) \right. \\
&\qquad + \sum_{i,j=1}^{\nu} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s) \\
&\qquad \left. + f\left( X(T_{n,k+1} \wedge t) \right) - f\left( X(T_{n,k+1} \wedge t\,-) \right) \right\}, \tag{3.193}
\end{align*}
where in the last step the jump terms were recombined using the Taylor formula (3.189).
Upon subtracting (3.193) from (3.192) we infer, by employing Proposition 3.85:
\begin{align*}
&f(X(t)) - f(X(0)) - \int_0^t \nabla f(X(s-)) \cdot dX(s) \\
&\qquad - \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s) \\
&= -\sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( \nabla f(X(s-)) - \nabla f\left( X(T_{n,k} \wedge t) \right) \right) \cdot dX(s) \\
&\quad + \sum_{i,j=1}^{\nu} \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma) \left\{ D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) \right. \\
&\qquad\qquad \left. - D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) \right\} d\sigma\,d[X_i,X_j](s) \\
&\quad + \sum_{i,j=1}^{\nu} \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\qquad \times \left\{ \left( X_i(T_{n,k+1} \wedge t\,-) - X_i(T_{n,k} \wedge t) \right)\left( X_j(T_{n,k+1} \wedge t\,-) - X_j(T_{n,k} \wedge t) \right) \right. \\
&\qquad\qquad \left. - [X_i,X_j](T_{n,k+1} \wedge t\,-) + [X_i,X_j](T_{n,k} \wedge t) \right\} \\
&= -\sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( \nabla f(X(s-)) - \nabla f\left( X(T_{n,k} \wedge t) \right) \right) \cdot dX(s) \\
&\quad + \sum_{i,j=1}^{\nu} \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma) \left\{ D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) \right. \\
&\qquad\qquad \left. - D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) \right\} d\sigma\,d[X_i,X_j](s) \\
&\quad + \sum_{i,j=1}^{\nu} \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\qquad \times \left\{ \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dX_j(s) + \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_j(s-) - X_j(T_{n,k} \wedge t) \right) dX_i(s) \right\}. \tag{3.194}
\end{align*}

We shall estimate the following quantities:
\[
\mathbb{E}\left[ \left( \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dM_i(s) \right)^2 \right]; \tag{3.195}
\]
\[
\mathbb{E}\left[ \left| \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dA_i(s) \right| \right]; \tag{3.196}
\]
\begin{align*}
\mathbb{E}\Bigg[ \Bigg| \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma) &\left\{ D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) \right. \\
&\left. - D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) \right\} d\sigma\,d[X_i,X_j](s) \Bigg| \Bigg]; \tag{3.197}
\end{align*}
\begin{align*}
\mathbb{E}\Bigg[ \Bigg( \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f&\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\times \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dM_j(s) \Bigg)^2 \Bigg]; \tag{3.198}
\end{align*}
\begin{align*}
\mathbb{E}\Bigg[ \Bigg| \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f&\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\times \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dA_j(s) \Bigg| \Bigg]. \tag{3.199}
\end{align*}
Since the process $u \mapsto \int_0^u \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dM_i(s)$, $u \geq 0$, is a martingale, increments over the disjoint stochastic intervals are orthogonal, and the quantity in (3.195) verifies
\begin{align*}
&\mathbb{E}\left[ \left( \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dM_i(s) \right)^2 \right] \\
&= \sum_{k=0}^{\infty} \mathbb{E}\left[ \left( \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dM_i(s) \right)^2 \right] \\
&= \sum_{k=0}^{\infty} \mathbb{E}\left[ \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right)^2 d\langle M_i\rangle(s) \right] \\
&\leq \sup\left\{ \left| D_i f(y) - D_i f(x) \right|^2 : x,y \in \mathbb{R}^\nu,\ |y-x| \leq \tfrac{1}{n},\ \max(|x|,|y|) \leq 2L \right\} \mathbb{E}\left( \langle M_i\rangle(t) \right). \tag{3.200}
\end{align*}
Similarly we obtain an estimate for the quantity in (3.198): since $\int_0^1 (1-\sigma)\,d\sigma = \frac{1}{2}$ and $\left| X_i(s-) - X_i(T_{n,k} \wedge t) \right| \leq \frac{1}{n}$ on each interval,
\begin{align*}
&\mathbb{E}\Bigg[ \Bigg( \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\qquad\qquad \times \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dM_j(s) \Bigg)^2 \Bigg] \\
&\leq \frac{1}{4} \sup_{|y| \leq 2L} \left| D_i D_j f(y) \right|^2\ \sum_{k=0}^{\infty} \mathbb{E}\left[ \left( \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dM_j(s) \right)^2 \right] \\
&\leq \frac{1}{4n^2} \sup_{|y| \leq 2L} \left| D_i D_j f(y) \right|^2\ \mathbb{E}\left( \langle M_j\rangle(t) \right). \tag{3.201}
\end{align*}




The other estimates are even easier:
\begin{align*}
&\mathbb{E}\left[ \left| \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( D_i f(X(s-)) - D_i f\left( X(T_{n,k} \wedge t) \right) \right) dA_i(s) \right| \right] \\
&\leq \sup\left\{ \left| D_i f(y) - D_i f(x) \right| : |y-x| \leq \tfrac{1}{n},\ \max(|x|,|y|) \leq 2L \right\} \mathbb{E}\left( \int_0^t \left| dA_i(s) \right| \right); \tag{3.202}
\end{align*}
\begin{align*}
&\mathbb{E}\Bigg[ \Bigg| \sum_{k=0}^{\infty} \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \int_0^1 (1-\sigma) \left\{ D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) \right. \\
&\qquad\qquad \left. - D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) \right\} d\sigma\,d[X_i,X_j](s) \Bigg| \Bigg] \\
&\leq \sup\left\{ \left| D_i D_j f(y) - D_i D_j f(x) \right| : |y-x| \leq \tfrac{1}{n},\ \max(|x|,|y|) \leq 2L \right\} \mathbb{E}\left( \int_0^t \left| d[X_i,X_j](s) \right| \right) \\
&\leq \sup\left\{ \left| D_i D_j f(y) - D_i D_j f(x) \right| : |y-x| \leq \tfrac{1}{n},\ \max(|x|,|y|) \leq 2L \right\} \\
&\qquad \times \sqrt{\mathbb{E}\left( [X_i,X_i](t) \right)}\,\sqrt{\mathbb{E}\left( [X_j,X_j](t) \right)}; \quad\text{and} \tag{3.203}
\end{align*}
\begin{align*}
&\mathbb{E}\Bigg[ \Bigg| \sum_{k=0}^{\infty} \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(T_{n,k} \wedge t) + \sigma X(T_{n,k+1} \wedge t\,-) \right) d\sigma \\
&\qquad\qquad \times \int_{T_{n,k} \wedge t}^{T_{n,k+1} \wedge t\,-} \left( X_i(s-) - X_i(T_{n,k} \wedge t) \right) dA_j(s) \Bigg| \Bigg]
\leq \frac{1}{2n} \sup_{y \in \mathbb{R}^\nu,\,|y| \leq 2L} \left| D_i D_j f(y) \right|\ \mathbb{E}\left( \int_0^t \left| dA_j(s) \right| \right). \tag{3.204}
\end{align*}

The inequality (3.203) will be established shortly. The quantities (3.200), (3.201), (3.202), (3.203) and (3.204) tend to zero as $n$ tends to infinity. Consequently, from (3.194) it then follows that, $\mathbb{P}$-almost surely,
\begin{align*}
f(X(t)) = f(X(0)) &+ \int_0^t \nabla f(X(s-)) \cdot dX(s) \tag{3.205} \\
&+ \sum_{i,j=1}^{\nu} \int_0^t \int_0^1 (1-\sigma)\,D_i D_j f\left( (1-\sigma)X(s-) + \sigma X(s) \right) d\sigma\,d[X_i,X_j](s),
\end{align*}
so that the formula of Itô has now been established. For completeness we prove the inequality
\[
\mathbb{E}\left( \int_0^t \left| d[X_i,X_j](s) \right| \right) \leq \sqrt{\mathbb{E}\left( [X_i,X_i](t) \right)}\,\sqrt{\mathbb{E}\left( [X_j,X_j](t) \right)}. \tag{3.206}
\]




A proof of (3.206) will establish (3.203). For an appropriate sequence of subdivisions $0 = s_0^{(n)} < s_1^{(n)} < \cdots < s_{N_n}^{(n)} = t$ we have
\begin{align*}
\int_0^t \left| d[X_i,X_j](s) \right|
&= \lim_{n\to\infty} \sum_{k=1}^{N_n} \left| [X_i,X_j]\left( s_k^{(n)} \right) - [X_i,X_j]\left( s_{k-1}^{(n)} \right) \right| \\
&\leq \lim_{n\to\infty} \sum_{k=1}^{N_n} \sqrt{ [X_i,X_i]\left( s_k^{(n)} \right) - [X_i,X_i]\left( s_{k-1}^{(n)} \right) }\,\sqrt{ [X_j,X_j]\left( s_k^{(n)} \right) - [X_j,X_j]\left( s_{k-1}^{(n)} \right) } \\
&\leq \lim_{n\to\infty} \left( \sum_{k=1}^{N_n} \left( [X_i,X_i]\left( s_k^{(n)} \right) - [X_i,X_i]\left( s_{k-1}^{(n)} \right) \right) \right)^{1/2} \left( \sum_{k=1}^{N_n} \left( [X_j,X_j]\left( s_k^{(n)} \right) - [X_j,X_j]\left( s_{k-1}^{(n)} \right) \right) \right)^{1/2} \\
&= \left( [X_i,X_i](t) - [X_i,X_i](0) \right)^{1/2} \left( [X_j,X_j](t) - [X_j,X_j](0) \right)^{1/2}. \tag{3.207}
\end{align*}
Taking expectations and using the Cauchy–Schwarz inequality once more yields the desired result.

This completes the proofs of Theorems 3.86 and 3.88. □
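As a concrete illustration of the continuous case (Corollary 3.87), take $\nu = 1$ and $f(x) = x^3$, so that (3.186) reads $b(t)^3 = 3\int_0^t b(s)^2\,db(s) + 3\int_0^t b(s)\,ds$. A Monte Carlo sketch in Python/NumPy; the discretization sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n_steps, n_paths = 1.0, 2000, 2000
dt = t / n_steps

db = rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps))
b = np.cumsum(db, axis=1)
b_left = np.hstack([np.zeros((n_paths, 1)), b[:, :-1]])  # non-anticipating values

# Ito's formula for f(x) = x^3:  f'(x) = 3x^2, f''(x) = 6x.
stoch_term = np.sum(3.0 * b_left**2 * db, axis=1)  # approximates int 3 b^2 db
drift_term = np.sum(3.0 * b_left * dt, axis=1)     # approximates int 3 b ds
residual = b[:, -1]**3 - stoch_term - drift_term
print(np.mean(np.abs(residual)))
```

The residual collects exactly the higher-order Taylor terms that the $\frac{1}{2}f''$ correction replaces, and it vanishes as the mesh is refined.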


Remark. In the proof of equality (3.194) there is a gap: it is correct if the process $A = 0$. In order to make the proof complete, Proposition 3.85 has to be supplemented with equalities of the form (with $M_i(t) = \sum_{k=1}^{\nu} \int_0^t \sigma_{ik}(s)\,db_k(s)$):
\begin{align*}
M_i(t)A_j(t) &= \int_0^t M_i(s)\,dA_j(s) + \sum_{k=1}^{\nu} \int_0^t \sigma_{ik}(s)A_j(s)\,db_k(s); \\
A_i(t)A_j(t) &= \int_0^t A_i(s)\,dA_j(s) + \int_0^t A_j(s)\,dA_i(s).
\end{align*}
Equalities of this kind are true for continuous processes. If jumps are present, even more care has to be taken. We continue with some examples. We begin with the heat equation.


Example 1. (Heat equation) Let $U$ be an open subset of $\mathbb{R}^\nu$, let $f : U \to \mathbb{R}$ be a function in $C_0(U)$ and let $u : [0,\infty) \times U \to \mathbb{R}$ be a solution to the following problem:
\[
\begin{cases}
\dfrac{\partial u}{\partial t} = \dfrac{1}{2}\Delta u & \text{in } [0,\infty) \times U; \\[1ex]
u \text{ is continuous on } [0,\infty) \times U \text{ and } u(0,x) = f(x).
\end{cases}
\]
Moreover we assume that $\lim_{x \to b,\,x \in U} u(t,x) = 0$ if $b$ belongs to $\partial U$. Then $u(t,x) = \mathbb{E}_x\left[ f(b(t)) : \tau > t \right]$, where $\tau$ is the exit time of $U$: $\tau = \inf\{ s > 0 : b(s) \notin U \}$. Of course $\{b(s) : s \geq 0\}$ stands for $\nu$-dimensional Brownian motion. In order to prove this claim we fix $t > 0$ and we consider the process $\{M(s) : 0 \leq s \leq t\}$ defined by $M(s) = u(t-s, b(s))\,\mathbf{1}_{\{\tau > s\}}$. An application of Itô's formula yields the following identities:
\begin{align*}
M(s) - M(0)
&= -\int_0^s \frac{\partial u}{\partial t}(t-r, b(r))\,\mathbf{1}_{\{\tau > r\}}\,dr + \int_0^s \nabla u(t-r, b(r))\,\mathbf{1}_{\{\tau > r\}} \cdot db(r) \\
&\quad + \frac{1}{2}\int_0^s \Delta u(t-r, b(r))\,\mathbf{1}_{\{\tau > r\}}\,dr \\
&= \int_0^s \left( -\frac{\partial u}{\partial t}(t-r, b(r)) + \frac{1}{2}\Delta u(t-r, b(r)) \right) \mathbf{1}_{\{\tau > r\}}\,dr + \int_0^s \nabla u(t-r, b(r))\,\mathbf{1}_{\{\tau > r\}} \cdot db(r) \\
&= \int_0^s \nabla u(t-r, b(r))\,\mathbf{1}_{\{\tau > r\}} \cdot db(r).
\end{align*}
Consequently, the process $\{M(s) : 0 \leq s \leq t\}$ is a martingale. It follows that
\[
u(t,x) = \mathbb{E}_x\left( u(t, b(0)) \right) = \mathbb{E}_x\left( M(0) \right) = \mathbb{E}_x\left( M(t) \right)
= \mathbb{E}_x\left( u(0, b(t))\,\mathbf{1}_{\{\tau > t\}} \right) = \mathbb{E}_x\left( f(b(t))\,\mathbf{1}_{\{\tau > t\}} \right).
\]
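Example 1 can be tested numerically on $U = (-1,1)$ with $f(x) = \cos(\pi x/2)$, for which the killed heat semigroup is explicit: $u(t,x) = e^{-\pi^2 t/8}\cos(\pi x/2)$, since $\frac{1}{2}\Delta\cos(\pi x/2) = -\frac{\pi^2}{8}\cos(\pi x/2)$ and the boundary values vanish. The sketch below simulates paths started at $x$, kills those that leave $U$ at a grid time, and averages $f(b(t))$ over the survivors; checking exits only at grid times slightly overestimates survival, a bias of order $\sqrt{dt}$. All numerical parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
x0, t = 0.3, 0.5
n_steps, n_paths = 400, 100_000
dt = t / n_steps

def f(x):
    return np.cos(np.pi * x / 2)

# Brownian paths started at x0.
b = x0 + np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)

# tau > t on the grid: the path never left U = (-1, 1) at a grid time.
alive = np.all(np.abs(b) < 1.0, axis=1)

u_mc = np.mean(f(b[:, -1]) * alive)            # E_x[f(b(t)); tau > t]
u_exact = np.exp(-np.pi**2 * t / 8) * f(x0)    # eigenfunction of the killed semigroup
print(u_mc, u_exact)
```

The Monte Carlo value matches the exact killed-semigroup solution up to the sampling error and the small discrete-monitoring bias.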




Example 2. Let $U$ be an open subset of $\mathbb{R}^\nu$, let $f$ belong to $C_0(U)$, let $g : [0,\infty) \times U \to \mathbb{R}$ be a function in $C_0([0,\infty) \times U)$, and let $u : [0,\infty) \times U \to \mathbb{R}$ be a solution to the following problem:
\[
\begin{cases}
\dfrac{\partial u}{\partial t} = \dfrac{1}{2}\Delta u + g & \text{in } [0,\infty) \times U; \\[1ex]
u \text{ is continuous on } [0,\infty) \times U \text{ and } u(0,x) = f(x).
\end{cases}
\]
Moreover we assume that $\lim_{x \to b,\,x \in U} u(t,x) = 0$ if $b$ belongs to $\partial U$. Then
\[
u(t,x) = \mathbb{E}_x\left( f(b(t)) : \tau > t \right) + \mathbb{E}_x\left( \int_0^{\min(t,\tau)} g(t-r, b(r))\,dr \right),
\]
where, as in Example 1, $\tau$ is the exit time of $U$. Also as in Example 1, $\{b(s) : s \geq 0\}$ stands for $\nu$-dimensional Brownian motion. A proof can be given along the same lines as in the previous example.


Example 3. (Feynman-Kac formula) Let U be an open subset of R", let 
/ belong to C$(U) and let V : U —> R be an appropriate function and let 
u : [0, go) x U —*• R be a solution to the following problem: 

r ^ 

— = - Au — Vu in [ 0 , oo) x U: 

< dt 2 v i ) 

u is continuous on [0, go) x U and u(0,x) = f(x). 

Moreover we want that lim x ^b,xeu u (t, x) = 0 if b belongs to dU. Then u(t, x) = 
E x ^exp f 0 V (b(r))drj f(b(t)) : r > t). 

For the proof we fix t > 0 and we consider the process {M(s) : 0 ^ s < t} dehned 
by M (s) = u(t - s, b(s)) exp (- £ V(b(r))dr) l{ r>s } and we apply Ito’s formula 
to obtain: 


M(s) - M( 0) 


ri 


exp 


— r, b(r)) -\ —A u(t — r, b(r)) — V(b(r))u(t — r, b(r)) 

(J L Z 


+ 


^V(b(f,))dp 

S7u{t — r,b(r)) e 

Jo 


'-{r>r} 


dr 


JV(Kp))dp) l{r>r} • db(r) 

= J Vu(t - r, b(r)) exp J V(b(p))dpj l{ T>r } • db(r). 

Here we used the fact that u is supposed to be a solution of our initial value problem. It follows that the process \{M(s) : 0 \leq s \leq t\} is a martingale. Hence we may conclude that

u(t,x) = \mathbb{E}_x[M(0)] = \mathbb{E}_x[M(t)]
= \mathbb{E}_x\Bigl[u(0, b(t)) \exp\Bigl(-\int_0^t V(b(\rho))\,d\rho\Bigr) : \tau > t\Bigr]
= \mathbb{E}_x\Bigl[f(b(t)) \exp\Bigl(-\int_0^t V(b(\rho))\,d\rho\Bigr) : \tau > t\Bigr].
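The Feynman-Kac representation lends itself to a direct Monte Carlo check. The sketch below (an illustration, not from the text) discretises one-dimensional Brownian motion on U = (-1, 1), kills paths at the exit time \tau, and averages the exponentially weighted payoff; the step and path counts and the test choices f(y) = \cos(\pi y/2), V(y) = y^2 are arbitrary illustrative assumptions.

```python
import math
import random

def feynman_kac_mc(x, t, f, V, n_paths=20000, n_steps=200):
    """Monte Carlo estimate of u(t,x) = E_x[exp(-int_0^t V(b(r)) dr) f(b(t)); tau > t]
    for one-dimensional Brownian motion killed on leaving U = (-1, 1)."""
    dt = t / n_steps
    sqdt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        b, weight, alive = x, 0.0, True
        for _ in range(n_steps):
            weight += V(b) * dt          # left-endpoint sum for int_0^t V(b(r)) dr
            b += sqdt * random.gauss(0.0, 1.0)
            if not -1.0 < b < 1.0:       # path has left U, so tau <= t: contributes 0
                alive = False
                break
        if alive:
            total += math.exp(-weight) * f(b)
    return total / n_paths

random.seed(0)
# Assumed test data: f(y) = cos(pi y / 2) vanishes on the boundary, V(y) = y^2 >= 0,
# so the estimate must lie between 0 and 1.
u = feynman_kac_mc(0.0, 0.25, f=lambda y: math.cos(math.pi * y / 2), V=lambda y: y * y)
print(0.0 <= u <= 1.0)
```

The killing at the boundary implements the restriction \{\tau > t\}, and the accumulated weight implements the Feynman-Kac exponential.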




Example 4. (Cameron-Martin or Girsanov transformation). Let U be an open subset of \mathbb{R}^\nu, let f belong to C_0(U), let c : U \to \mathbb{R}^\nu be an appropriate vector field on U, and let u : [0,\infty) \times U \to \mathbb{R} be a solution to the following problem:

\frac{\partial u}{\partial t} = \frac{1}{2}\Delta u + c \cdot \nabla u \quad \text{in } [0,\infty) \times U;
u is continuous on [0,\infty) \times U and u(0,x) = f(x).

Moreover we want that \lim_{x \to b,\, x \in U} u(t,x) = 0 if b belongs to \partial U. Then u(t,x) = \mathbb{E}_x\bigl(\exp(Z(t)) f(b(t)) : \tau > t\bigr), where Z(t) = \int_0^t c(b(r)) \cdot db(r) - \frac{1}{2}\int_0^t |c(b(r))|^2\,dr.

For a proof we fix t > 0 and we consider the process \{M(s) : 0 \leq s \leq t\} defined by M(s) = u(t-s, b(s)) \exp(Z(s))\, 1_{\{\tau > s\}}. An application of Itô's formula to the function f(s,x,y) = u(t-s,x)\exp(y) will yield the following result:

M(s) - M(0)
= -\int_0^s \frac{\partial u}{\partial t}(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,dr
+ \int_0^s \nabla u(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}} \cdot db(r)
+ \int_0^s u(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,dZ(r)
+ \frac{1}{2}\int_0^s \Delta u(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,dr
+ \sum_{j=1}^{\nu}\int_0^s D_j u(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,d\langle b_j, Z\rangle(r)
+ \frac{1}{2}\int_0^s u(t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,d\langle Z, Z\rangle(r)

= \int_0^s \Bigl[-\frac{\partial u}{\partial t} + \frac{1}{2}\Delta u + c(b(r)) \cdot \nabla u\Bigr](t-r, b(r)) \exp(Z(r))\, 1_{\{\tau > r\}}\,dr
+ \int_0^s \bigl\{\nabla u(t-r, b(r)) + u(t-r, b(r))\, c(b(r))\bigr\} \exp(Z(r))\, 1_{\{\tau > r\}} \cdot db(r)
- \frac{1}{2}\int_0^s u(t-r, b(r)) \exp(Z(r))\, |c(b(r))|^2\, 1_{\{\tau > r\}}\,dr
+ \frac{1}{2}\int_0^s u(t-r, b(r)) \exp(Z(r))\, |c(b(r))|^2\, 1_{\{\tau > r\}}\,dr

= \int_0^s \bigl\{\nabla u(t-r, b(r)) + u(t-r, b(r))\, c(b(r))\bigr\} \exp(Z(r))\, 1_{\{\tau > r\}} \cdot db(r).

Here we used dZ(r) = c(b(r)) \cdot db(r) - \frac{1}{2}|c(b(r))|^2\,dr, d\langle b_j, Z\rangle(r) = c_j(b(r))\,dr and d\langle Z, Z\rangle(r) = |c(b(r))|^2\,dr, together with the fact that u is supposed to be a solution of our initial value problem.


As above it will follow that u(t,x) = \mathbb{E}_x\bigl(\exp(Z(t)) f(b(t)) : \tau > t\bigr).

Example 5. (Stochastic differential equation). Let (\sigma_{jk}(x))_{j,k=1}^{\nu}, x \in \mathbb{R}^\nu, be a continuous square matrix valued function and let c(x) be a so-called drift vector field (see the previous example). Suppose that the process \{X_x(s) : s \geq 0\}




satisfies the following (stochastic) integral equation:

X_x(t) = x + \int_0^t c(X_x(s))\,ds + \int_0^t \sigma(X_x(s)) \cdot db(s).

In other words the process \{X_x(s) : s \geq 0\} is a solution of the following stochastic differential equation:

dX_x(t) = c(X_x(t))\,dt + \sigma(X_x(t)) \cdot db(t)

together with X_x(0) = x. The integral \int_0^t \sigma(X_x(s))\,db(s) has the interpretation

\Bigl(\int_0^t \sigma(X_x(s))\,db(s)\Bigr)_j = \sum_{k=1}^{\nu} \int_0^t \sigma_{jk}(X_x(s))\,db_k(s).
Next let u : [0,\infty) \times \mathbb{R}^\nu \to \mathbb{R} be a twice continuously differentiable function. Then, by Itô's lemma,

u(t-s, X_x(s)) - u(t, X_x(0))
= -\int_0^s \frac{\partial u}{\partial t}(t-r, X_x(r))\,dr + \int_0^s \nabla u(t-r, X_x(r)) \cdot dX_x(r)
+ \frac{1}{2}\sum_{j,k=1}^{\nu} \int_0^s D_j D_k u(t-r, X_x(r))\,d\langle X_x^j, X_x^k\rangle(r).

Next we compute

d\langle X_x^j, X_x^k\rangle(r) = \sum_{m,n=1}^{\nu} \sigma_{jm}(X_x(r))\, \sigma_{kn}(X_x(r))\,d\langle b_m, b_n\rangle(r)
= \sum_{m=1}^{\nu} \sigma_{jm}(X_x(r))\, \sigma_{km}(X_x(r))\,dr = \bigl(\sigma(X_x(r))\,\sigma(X_x(r))^T\bigr)_{jk}\,dr,

where \sigma(x)^T is the transposed matrix of \sigma(x). Next we introduce the differential operator L as follows:

[Lf](x) = \frac{1}{2}\sum_{j,k=1}^{\nu} \bigl(\sigma(x)\sigma(x)^T\bigr)_{jk} D_j D_k f(x) + \sum_{j=1}^{\nu} c_j(x) D_j f(x).

For our twice continuously differentiable function u we obtain:

u(t-s, X_x(s)) - u(t, X_x(0))
= -\int_0^s \frac{\partial u}{\partial t}(t-r, X_x(r))\,dr
+ \sum_{j=1}^{\nu} \int_0^s c_j(X_x(r))\, D_j u(t-r, X_x(r))\,dr
+ \int_0^s \nabla u(t-r, X_x(r))\,\sigma(X_x(r)) \cdot db(r)
+ \frac{1}{2}\sum_{j,k=1}^{\nu} \int_0^s \bigl(\sigma(X_x(r))\,\sigma(X_x(r))^T\bigr)_{jk} D_j D_k u(t-r, X_x(r))\,dr

= \int_0^s \nabla u(t-r, X_x(r))\,\sigma(X_x(r)) \cdot db(r) + \int_0^s \Bigl(L - \frac{\partial}{\partial t}\Bigr) u(t-r, X_x(r))\,dr.

So that, if \bigl(L - \frac{\partial}{\partial t}\bigr)u = 0, then, for 0 \leq s \leq t,

u(t-s, X_x(s)) - u(t, X_x(0)) = \int_0^s \nabla u(t-r, X_x(r))\,\sigma(X_x(r))\,db(r),

and hence the process M(s) := u(t-s, X_x(s)) is a martingale on the interval [0, t]. It follows that

u(t,x) = \mathbb{E}(M(0)) = \mathbb{E}(M(t)) = \mathbb{E}\bigl(u(0, X_x(t))\bigr) = \mathbb{E}\bigl(f(X_x(t))\bigr),

where u(0,x) = f(x). For more details on stochastic differential equations see Chapter 4.
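A solution of the stochastic integral equation above is usually approximated by the Euler-Maruyama scheme, which replaces both integrals by left-endpoint sums over a time grid. The one-dimensional sketch below is illustrative only; the drift \mu x and diffusion s x used in the sanity check are assumed test choices for which \mathbb{E}[X(t)] = x_0 e^{\mu t} is known in closed form.

```python
import math
import random

def euler_maruyama(x0, c, sigma, t, n_steps=1000):
    """One Euler-Maruyama path of dX(t) = c(X(t)) dt + sigma(X(t)) dW(t), X(0) = x0 (1-d)."""
    dt = t / n_steps
    sqdt = math.sqrt(dt)
    x = x0
    for _ in range(n_steps):
        # left-endpoint approximations of the drift and stochastic integrals
        x += c(x) * dt + sigma(x) * sqdt * random.gauss(0.0, 1.0)
    return x

# Sanity check with assumed coefficients c(x) = mu*x, sigma(x) = s*x (geometric
# Brownian motion), for which E[X(t)] = x0 * exp(mu * t) exactly.
random.seed(1)
mu, s, t, x0 = 0.1, 0.2, 1.0, 1.0
mean = sum(euler_maruyama(x0, lambda x: mu * x, lambda x: s * x, t, 200)
           for _ in range(4000)) / 4000
print(abs(mean - x0 * math.exp(mu * t)) < 0.05)
```

The scheme converges weakly of order one in the step size under standard Lipschitz assumptions on c and \sigma.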





Example 6. (Quantum mechanical magnetic field). Let a be an appropriate vector field on \mathbb{R}^\nu and let H(a,V) = \frac{1}{2}(i\nabla + a)^2 + V be the (quantum mechanical) Hamiltonian of a particle under the influence of the scalar potential V in a magnetic field B(x) with vector potential a(x); i.e. B = \nabla \times a. Let f be a function in C_0(\mathbb{R}^\nu) and let u : [0,\infty) \times \mathbb{R}^\nu \to \mathbb{C} be a solution to the following problem:

\frac{\partial u}{\partial t} = -H(a,V)u \quad \text{in } [0,\infty) \times \mathbb{R}^\nu;
u is continuous on [0,\infty) \times \mathbb{R}^\nu and u(0,x) = f(x).

Moreover we want that \lim_{|x| \to \infty} u(t,x) = 0. Then u(t,x) = \mathbb{E}_x\bigl[e^{Z(t)} f(b(t))\bigr], where

Z(t) = -i\int_0^t a(b(s)) \cdot db(s) - \frac{i}{2}\int_0^t \nabla \cdot a(b(s))\,ds - \int_0^t V(b(s))\,ds

with \nabla \cdot a = \sum_{j=1}^{\nu} \frac{\partial a_j}{\partial x_j}. Put M(s) = u(t-s, b(s)) \exp(Z(s)), 0 \leq s \leq t. An application of Itô's formula to the function f(s,x,y) = u(t-s,x)\exp(y) will yield the following result:


M(s) - M(0) = f(s, b(s), Z(s)) - f(0, b(0), Z(0))
= \int_0^s \frac{\partial f}{\partial \sigma}(\sigma, b(\sigma), Z(\sigma))\,d\sigma + \int_0^s \nabla_x f(\sigma, b(\sigma), Z(\sigma)) \cdot db(\sigma)
+ \int_0^s \frac{\partial f}{\partial y}(\sigma, b(\sigma), Z(\sigma))\,dZ(\sigma) + \frac{1}{2}\int_0^s \Delta_x f(\sigma, b(\sigma), Z(\sigma))\,d\sigma
+ \sum_{j=1}^{\nu} \int_0^s \frac{\partial^2 f}{\partial x_j \partial y}(\sigma, b(\sigma), Z(\sigma))\,d\langle b_j, Z\rangle(\sigma)
+ \frac{1}{2}\int_0^s \frac{\partial^2 f}{\partial y^2}(\sigma, b(\sigma), Z(\sigma))\,d\langle Z, Z\rangle(\sigma)

(use d\langle b_j, Z\rangle(\sigma) = -i\, a_j(b(\sigma))\,d\sigma and d\langle Z, Z\rangle(\sigma) = -\sum_{j=1}^{\nu} a_j(b(\sigma))^2\,d\sigma)

= -\int_0^s \frac{\partial u}{\partial t}(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\,d\sigma + \int_0^s \nabla_x u(t-\sigma, b(\sigma))\, e^{Z(\sigma)} \cdot db(\sigma)
+ \int_0^s u(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\,dZ(\sigma) + \frac{1}{2}\int_0^s \Delta_x u(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\,d\sigma
- i\sum_{j=1}^{\nu} \int_0^s \frac{\partial u}{\partial x_j}(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\, a_j(b(\sigma))\,d\sigma
- \frac{1}{2}\int_0^s u(t-\sigma, b(\sigma))\, e^{Z(\sigma)} \sum_{j=1}^{\nu} a_j(b(\sigma))^2\,d\sigma

= \int_0^s \Bigl\{-\frac{\partial u}{\partial t} + \frac{1}{2}\Delta u - i\, a(b(\sigma)) \cdot \nabla_x u - \frac{1}{2}|a(b(\sigma))|^2 u - \frac{i}{2}\,\nabla \cdot a(b(\sigma))\, u - V(b(\sigma))\, u\Bigr\}(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\,d\sigma
+ \int_0^s \nabla_x u(t-\sigma, b(\sigma))\, e^{Z(\sigma)} \cdot db(\sigma) - i\int_0^s u(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\, a(b(\sigma)) \cdot db(\sigma)

= \int_0^s \Bigl\{-\frac{\partial u}{\partial t} - \frac{1}{2}(i\nabla + a)^2 u - V u\Bigr\}(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\,d\sigma
+ \int_0^s \nabla_x u(t-\sigma, b(\sigma))\, e^{Z(\sigma)} \cdot db(\sigma) - i\int_0^s u(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\, a(b(\sigma)) \cdot db(\sigma)

= \int_0^s \nabla_x u(t-\sigma, b(\sigma))\, e^{Z(\sigma)} \cdot db(\sigma) - i\int_0^s u(t-\sigma, b(\sigma))\, e^{Z(\sigma)}\, a(b(\sigma)) \cdot db(\sigma).

Here we used the fact that the function u satisfies the differential equation. The claim at the beginning of the example then follows as in Example 4.


Example 7. A geometric Brownian motion (GBM) (occasionally called exponential Brownian motion) is a continuous-time stochastic process in which the logarithm of the randomly varying quantity follows a Brownian motion, also called a Wiener process: see e.g. Ross [116], Section 10.3.2. It is applicable to mathematical modelling of some phenomena in financial markets. It is used particularly in the field of option pricing, because a quantity that follows a GBM may take any positive value, and only the fractional changes of the random variate are significant. This is a reasonable approximation of stock price dynamics except for rare events.

A stochastic process S t is said to follow a GBM if it satisfies the following 
stochastic differential equation: 

dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t),

where W(t) is a Wiener process or Brownian motion and \mu ("the percentage drift" or "drift rate") and \sigma ("the (percentage or ratio) volatility") are constants.

For an arbitrary initial value S(0) the equation has the analytic solution

S(t) = S(0)\exp\Bigl(\Bigl(\mu - \frac{\sigma^2}{2}\Bigr)t + \sigma W(t)\Bigr),

which is a log-normally distributed random variable with expected value given by \mathbb{E}[S(t)] = e^{\mu t}S(0) and variance \mathrm{Var}(S(t)) = e^{2\mu t}S(0)^2\bigl(e^{\sigma^2 t} - 1\bigr).




The correctness of the solution can be verified using Itô's lemma. The random variable \log\bigl(S(t)/S(0)\bigr) is normally distributed with mean \bigl(\mu - \frac{1}{2}\sigma^2\bigr)t and variance \sigma^2 t, which reflects the fact that increments of a GBM are normal relative to the current price; this is why the process has the name "geometric".
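The stated mean and variance can be checked by sampling S(t) directly from its log-normal law. The following sketch (illustrative only, with arbitrary parameter choices) compares Monte Carlo moments with \mathbb{E}[S(t)] = e^{\mu t}S(0) and \mathrm{Var}(S(t)) = e^{2\mu t}S(0)^2(e^{\sigma^2 t} - 1).

```python
import math
import random

def gbm_sample(s0, mu, sigma, t):
    """Exact draw of S(t) = S(0) exp((mu - sigma^2/2) t + sigma W(t))."""
    w = math.sqrt(t) * random.gauss(0.0, 1.0)   # W(t) ~ N(0, t)
    return s0 * math.exp((mu - 0.5 * sigma * sigma) * t + sigma * w)

random.seed(2)
s0, mu, sigma, t, n = 1.0, 0.05, 0.3, 1.0, 200000   # arbitrary illustrative parameters
samples = [gbm_sample(s0, mu, sigma, t) for _ in range(n)]
mean = sum(samples) / n
var = sum((v - mean) ** 2 for v in samples) / n
exact_mean = s0 * math.exp(mu * t)                               # E[S(t)] = e^{mu t} S(0)
exact_var = math.exp(2 * mu * t) * s0 ** 2 * (math.exp(sigma ** 2 * t) - 1)
print(abs(mean - exact_mean) < 0.01, abs(var - exact_var) < 0.01)
```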

Example 8. The term Black-Scholes refers to three closely related concepts: 


1. The Black-Scholes model is a mathematical model of the market for an 
equity, in which the equity’s price is a stochastic process. 

2. The Black-Scholes PDE is a partial differential equation which (in the 
model) must be satisfied by the price of a derivative on the equity. 

3. The Black-Scholes formula is the result obtained by solving the Black- 
Scholes PDE for a European call option. 


Fischer Black and Myron Scholes first articulated the Black-Scholes formula in their 1973 paper, "The Pricing of Options and Corporate Liabilities": see [19]. The foundation for their research relied on work developed by scholars such as Jack L. Treynor, Paul Samuelson, A. James Boness, Sheen T. Kassouf, and Edward O. Thorp. The fundamental insight of Black-Scholes is that the option is implicitly priced if the stock is traded.

Robert C. Merton was the first to publish a paper expanding the mathematical 
understanding of the options pricing model and coined the term “Black-Scholes” 
options pricing model. 

Merton and Scholes received the 1997 The Sveriges Riksbank Prize in Economic 
Sciences in Memory of Alfred Nobel for this and related work. Though ineligible 
for the prize because of his death in 1995, Black was mentioned as a contributor 
by the Swedish academy. 




7. Black-Scholes model 


The text in this section is taken from Wikipedia (English version). The Black- 
Scholes model of the market for a particular equity makes the following explicit 
assumptions: 

1. It is possible to borrow and lend cash at a known constant risk-free 
interest rate. 

2. The price follows a geometric Brownian motion with constant drift and 
volatility. 

3. There are no transaction costs. 

4. The stock does not pay a dividend (see below for extensions to handle 
dividend payments). 

5. All securities are perfectly divisible (i.e. it is possible to buy any frac¬ 
tion of a share). 

6. There are no restrictions on short selling. 

7. There is no arbitrage opportunity. 

From these ideal conditions in the market for an equity (and for an option on 
the equity), the authors show that it is possible to create a hedged position, 
consisting of a long position in the stock and a short position in [calls on the 
same stock], whose value will not depend on the price of the stock. 

Notation. We define the following quantities: 

- S, the price of the stock (please note as below). 

- V(S,t), the price of a financial derivative as a function of time and 
stock price. 

- C(S, t ) the price of a European call and P(S, t ) the price of a European 
put option. 

- K, the strike of the option. 

- r, the annualized risk-free interest rate, continuously compounded. 

- \mu, the drift rate of S, annualized.

- \sigma, the volatility of the stock; this is the square root of the quadratic variation of the stock's log price process.

- t, a time in years; we generally use now = 0, expiry = T.

- \Pi, the value of a portfolio.

- R, the accumulated profit or loss following a delta-hedging trading strategy.

- N(x) denotes the standard normal cumulative distribution function.

- N'(x) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}x^2} denotes the standard normal probability density function.




Black-Scholes PDE. [Figure: simulated geometric Brownian motions with parameters from market data.]

In the model as described above, we assume that the underlying asset (typically the stock) follows a geometric Brownian motion. That is,

dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t),

where W(t) is a Brownian motion; the dW term here stands in for any and all sources of uncertainty in the price history of a stock.


The payoff of an option V(S,T) at maturity is known. To find its value at an earlier time we need to know how V evolves as a function of S and t. By Itô's lemma for two variables we have

dV(S(t),t) = \frac{\partial V(S(t),t)}{\partial S}\,dS(t) + \frac{\partial V(S(t),t)}{\partial t}\,dt + \frac{1}{2}\frac{\partial^2 V(S(t),t)}{\partial S^2}\,d\langle S, S\rangle(t)

= \Bigl(\frac{\partial V(S(t),t)}{\partial t} + \mu S(t)\frac{\partial V(S(t),t)}{\partial S} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2}\Bigr)dt + \sigma S(t)\frac{\partial V(S(t),t)}{\partial S}\,dW(t). (3.208)


Now consider a trading strategy under which one holds a(t) units of the stock with value S(t) and b(t) units of a bond with value \beta(t) at time t. The value V(S(t),t) of the portfolio of the trading strategy (a(t), b(t)) is then given by

V(S(t),t) = a(t)S(t) + b(t)\beta(t). (3.209)

Observe that (3.209) is equivalent to

b(t) = \frac{V(S(t),t) - a(t)S(t)}{\beta(t)}.

In addition, a(t) = \frac{\partial V(S(t),t)}{\partial S}, which is called the delta hedging rule. Assuming, as in the Black-Scholes model, that the strategy (a(t), b(t)) is self-financing, which by definition implies

dV(t) = a(t)\,dS(t) + b(t)\,d\beta(t), (3.210)


we get

dV(t) = \mu a(t)S(t)\,dt + b(t)\,d\beta(t) + \sigma a(t)S(t)\,dW(t). (3.211)

Assume that the process t \mapsto \beta(t), i.e., the bond price, is of bounded variation. By equating the terms with dW(t) in (3.208) and (3.211) we see a(t) = \frac{\partial V(S(t),t)}{\partial S}. From this, again equating the other terms in (3.208) and (3.211) and using (3.209), we also obtain

\Bigl(\frac{\partial V(S(t),t)}{\partial t} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2}\Bigr)dt = b(t)\,d\beta(t) = \Bigl(V(S(t),t) - S(t)\frac{\partial V(S(t),t)}{\partial S}\Bigr)\frac{d\beta(t)}{\beta(t)}. (3.212)




If the interest rate for the bond is constant, i.e., if d\beta(t) = r\beta(t)\,dt, or, what amounts to the same, \beta(t) = \beta(0)e^{rt}, then from (3.212) it also follows that

\frac{\partial V(S(t),t)}{\partial t} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2} = r\Bigl(V(S(t),t) - S(t)\frac{\partial V(S(t),t)}{\partial S}\Bigr). (3.213)


If we trade in a single option and continuously trade in the stock in order to hold \frac{\partial V}{\partial S} shares, then at time t the value of these holdings will be

\Pi(t) = V(S(t),t) - S(t)\frac{\partial V(S(t),t)}{\partial S}.

The composition of this portfolio, called the delta-hedge portfolio, will vary from time-step to time-step. Let R(t) denote the accumulated profit or loss from following this strategy. Then over the time period [t, t+dt], the instantaneous profit or loss is

dR(t) = dV(S(t),t) - \frac{\partial V(S(t),t)}{\partial S}\,dS(t).

By substituting in the equations above we get

dR(t) = \Bigl(\frac{\partial V(S(t),t)}{\partial t} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2}\Bigr)dt.

This equation contains no dW(t) term. That is, it is entirely risk free (delta neutral). Black, Scholes and Merton reason that under their ideal conditions, the rate of return on this portfolio must be equal at all times to the rate of return on any other risk free instrument; otherwise, there would be opportunities for arbitrage. Now assuming the risk free rate of return is r we must have over the time period [t, t+dt] (Black-Scholes assumption):

r\Pi(t)\,dt = dR(t) = \Bigl(\frac{\partial V(S(t),t)}{\partial t} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2}\Bigr)dt.

If we now substitute in for \Pi(t) and divide through by dt we obtain the Black-Scholes PDE:


\frac{\partial V(S(t),t)}{\partial t} + \frac{1}{2}\sigma^2 S(t)^2\frac{\partial^2 V(S(t),t)}{\partial S^2} + rS(t)\frac{\partial V(S(t),t)}{\partial S} - rV(S(t),t) = 0. (3.214)

Observe that the Black-Scholes assumption comes down to the assumption of self-financing, because the resulting partial differential equation in (3.213) and (3.214) is the same. With the assumptions of the Black-Scholes model, this partial differential equation holds whenever V is twice differentiable with respect to S and once with respect to t. Above we used the method of arbitrage-free pricing ("delta-hedging") to derive a PDE governing option prices given the Black-Scholes model. It is also possible to use a risk-neutrality argument. This latter method gives the price as the expectation of the option payoff under a particular probability measure, called the risk-neutral measure, which differs from the real world measure.




Black-Scholes formula. The Black-Scholes formula is used for obtaining the price of European put and call options. It is obtained by solving the Black-Scholes PDE as discussed; see the derivation below.

The value of a call option in terms of the Black-Scholes parameters is given by:

C(S,t) = C(S(t),t) = S(t)N(d_1) - Ke^{-r(T-t)}N(d_2), \quad \text{with} (3.215)

d_1 = \frac{\log\frac{S}{K} + \bigl(r + \frac{\sigma^2}{2}\bigr)(T-t)}{\sigma\sqrt{T-t}} \quad \text{and} \quad d_2 = d_1 - \sigma\sqrt{T-t}. (3.216)

The price of a put option is:

P(S,t) = P(S(t),t) = Ke^{-r(T-t)}N(-d_2) - S(t)N(-d_1). (3.217)

For both, as above:

1. N(\cdot) is the standard normal cumulative distribution function.

2. T - t is the time to maturity.

3. S = S(t) is the spot price of the underlying asset at time t.

4. K is the strike price.

5. r is the risk free interest rate (annual rate, expressed in terms of continuous compounding).

6. \sigma is the volatility in the log-returns of the underlying asset.
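Formulas (3.215)-(3.217) translate directly into code. The sketch below (parameter values are arbitrary examples) evaluates N(\cdot) via the error function and checks the put-call parity C - P = S - Ke^{-r(T-t)}, which both formulas must satisfy.

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function N(x)."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S, K, r, sigma, T, t=0.0):
    """European call and put prices from (3.215)-(3.217); no dividends."""
    tau = T - t
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    call = S * norm_cdf(d1) - K * math.exp(-r * tau) * norm_cdf(d2)
    put = K * math.exp(-r * tau) * norm_cdf(-d2) - S * norm_cdf(-d1)
    return call, put

# Arbitrary illustrative parameters.
call, put = black_scholes(S=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
parity = call - put - (100.0 - 100.0 * math.exp(-0.05))  # C - P = S - K e^{-r(T-t)}
print(abs(parity) < 1e-10)
```

Put-call parity holds identically here because N(d) + N(-d) = 1 for every d.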





Interpretation. The quantities N(d_1) and N(d_2) are the probabilities of the option expiring in-the-money under the equivalent exponential martingale probability measure (numeraire = stock) and the equivalent martingale probability measure (numeraire = risk free asset), respectively. The equivalent martingale probability measure is also called the risk-neutral probability measure. Note that both of these are probabilities in a measure theoretic sense, and neither of these is the true probability of expiring in-the-money under the real probability measure.


Derivation. We now show how to get from the general Black-Scholes PDE to 
a specific valuation for an option. Consider as an example the Black-Scholes 
price of a call option, for which the PDE above has boundary conditions 

C(0,t) = 0 for all t
C(S,t) \to S as S \to \infty
C(S,T) = \max(S - K, 0).

The last condition gives the value of the option at the time that the option matures. The solution of the PDE gives the value of the option at any earlier time, \mathbb{E}[\max(S - K, 0)]. In order to solve the PDE we transform the equation into a diffusion equation which may be solved using standard methods. To this
into a diffusion equation which may be solved using standard methods. To this 
end we introduce the change-of-variable transformation 

\tau = T - t, \quad u(x,\tau) = C\bigl(Ke^{x - (r - \frac{\sigma^2}{2})\tau}, T - \tau\bigr)e^{r\tau}, \quad \text{and} \quad x = \log\frac{S}{K} + \Bigl(r - \frac{\sigma^2}{2}\Bigr)\tau.

Note: in fact, in case we consider a call option we replace V(S(t),t) with C(S(t),t). Instead of u we may also consider

v(x,t) = V\bigl(Ke^{x - (r - \frac{1}{2}\sigma^2)(T-t)}, t\bigr)e^{r(T-t)}.

In case we consider a European call option we take as final value for v: v(x,T) = V(Ke^x, T) = C(Ke^x, T) = \max(Ke^x - K, 0) = K\max(e^x - 1, 0). Then the Black-Scholes PDE becomes a diffusion equation

\frac{\partial u}{\partial \tau} = \frac{1}{2}\sigma^2\frac{\partial^2 u}{\partial x^2}.

The terminal condition C(S,T) = \max(S - K, 0) now becomes an initial condition

u(x,0) = u_0(x) = K\max(e^x - 1, 0).

Using the standard method for solving a diffusion equation we have

u(x,\tau) = \frac{1}{\sigma\sqrt{2\pi\tau}}\int_{-\infty}^{\infty} u_0(y)\, e^{-(x-y)^2/(2\sigma^2\tau)}\,dy.

After some calculations we obtain

u(x,\tau) = Ke^{x + \sigma^2\tau/2}N(d_1) - KN(d_2),

where

d_1 = \frac{x + \sigma^2\tau}{\sigma\sqrt{\tau}} \quad \text{and} \quad d_2 = \frac{x}{\sigma\sqrt{\tau}}.
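The passage from the heat-kernel integral to the closed form Ke^{x+\sigma^2\tau/2}N(d_1) - KN(d_2) can be verified numerically. The sketch below (an illustration; the truncation width and grid size are assumed choices) approximates the convolution with u_0(y) = K\max(e^y - 1, 0) by the trapezoidal rule and compares it with the closed form.

```python
import math

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def u_closed(x, tau, K, sigma):
    """Closed form u(x, tau) = K e^{x + sigma^2 tau/2} N(d1) - K N(d2)."""
    d1 = (x + sigma ** 2 * tau) / (sigma * math.sqrt(tau))
    d2 = x / (sigma * math.sqrt(tau))
    return K * math.exp(x + 0.5 * sigma ** 2 * tau) * norm_cdf(d1) - K * norm_cdf(d2)

def u_kernel(x, tau, K, sigma, half_width=10.0, n=20001):
    """Heat-kernel convolution of u0(y) = K max(e^y - 1, 0), trapezoidal rule on a
    truncated interval [x - half_width, x + half_width] (assumed wide enough)."""
    s2t = sigma ** 2 * tau
    h = 2 * half_width / (n - 1)
    total = 0.0
    for i in range(n):
        y = x - half_width + i * h
        u0 = K * max(math.exp(y) - 1.0, 0.0)
        w = 0.5 if i in (0, n - 1) else 1.0
        total += w * u0 * math.exp(-(x - y) ** 2 / (2 * s2t))
    return total * h / (sigma * math.sqrt(2 * math.pi * tau))

diff = abs(u_closed(0.1, 0.5, 100.0, 0.2) - u_kernel(0.1, 0.5, 100.0, 0.2))
print(diff < 1e-3)
```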




Substituting for u, x, and \tau, we obtain the value of a call option in terms of the Black-Scholes parameters:

C(S,t) = SN(d_1) - Ke^{-r(T-t)}N(d_2),

where d_1 and d_2 are as in (3.216). The price of a put option may be computed from this by the put-call parity and simplifies to

P(S,t) = Ke^{-r(T-t)}N(-d_2) - SN(-d_1).

Risk neutral measure. Suppose our economy consists of 2 assets, a stock and a risk-free bond, and that we use the Black-Scholes model. In the model the evolution of the stock price can be described by geometric Brownian motion:

dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t),

where W(t) is a standard Brownian motion with respect to the physical measure. If we define

\widetilde{W}(t) = W(t) + \frac{\mu - r}{\sigma}\,t,


Girsanov's theorem states that there exists a measure Q under which \widetilde{W}(t) is a standard Brownian motion, i.e., a Brownian motion without a drift term and such that \mathbb{E}_Q\bigl[\widetilde{W}(t)^2\bigr] = t. For a more thorough discussion of Girsanov's theorem, which is in fact (much) more general, see assertion (4) in Proposition 4.24 in Chapter 4 Section 3. The quantity \frac{\mu - r}{\sigma} is known as the market price of risk. Differentiating and rearranging yields:

dW(t) = d\widetilde{W}(t) - \frac{\mu - r}{\sigma}\,dt.

Put this back in the original equation:

dS(t) = rS(t)\,dt + \sigma S(t)\,d\widetilde{W}(t).


The probability Q is the unique risk-neutral measure for the model. The (discounted) payoff process of a derivative on the stock H(t) = \mathbb{E}_Q(H(T) \mid \mathcal{F}_t) is a martingale under Q. Since S and H are Q-martingales we can invoke the martingale representation theorem to find a replicating strategy, a holding of stocks and bonds that pays off H(t) at all times t \leq T. The measure Q is given by Q(A) = \mathbb{E}\bigl[e^{-Z(T)}1_A\bigr], A \in \mathcal{F}_T, where

Z(t) = \frac{\mu - r}{\sigma}\,W(t) + \frac{1}{2}\Bigl(\frac{\mu - r}{\sigma}\Bigr)^2 t.

In fact a more general result is true. Let s \mapsto h(s) be a predictable process such that \mathbb{E}\Bigl[\exp\Bigl(\frac{1}{2}\int_0^T |h(s)|^2\,ds\Bigr)\Bigr] < \infty. Put

Z_h(t) = \int_0^t h(s)\,dW(s) + \frac{1}{2}\int_0^t |h(s)|^2\,ds.

Define the measure Q_h by Q_h(A) = \mathbb{E}\bigl[e^{-Z_h(T)}1_A\bigr], A \in \mathcal{F}_T. Put W_h(t) = W(t) + \int_0^t h(s)\,ds. Then the process W_h is a Brownian motion relative to the measure Q_h. The proof of this result uses Lévy's characterization of Brownian




motion: see Corollary 4.7. It says that a process W_h is a Q_h-Brownian motion if and only if the following two conditions are satisfied:

(1) The quadratic variation of W_h satisfies \langle W_h, W_h\rangle(t) = t.

(2) The process W_h is a local martingale relative to the measure Q_h.

(For a proof of this result see Theorem 4.5.) In our case we have \langle W_h, W_h\rangle(t) = \langle W, W\rangle(t) = t, and so (1) is satisfied. In order to establish (2) we use Itô calculus to obtain:

e^{-Z_h(t)}W_h(t) = \int_0^t e^{-Z_h(s)}\,dW(s) - \int_0^t e^{-Z_h(s)}W_h(s)h(s)\,dW(s).

Since the process t \mapsto e^{-Z_h(t)} is a martingale we see that the process W_h is a local Q_h-martingale.
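For constant h the change of measure can be checked by simulation: under Q_h the variable W_h(T) = W(T) + hT should be centred, and e^{-Z_h(T)} should have \mathbb{P}-expectation 1. The sketch below (illustrative parameters, not from the text) reweights samples drawn under \mathbb{P} accordingly.

```python
import math
import random

random.seed(3)
h, T, n = 0.5, 1.0, 400000      # arbitrary illustrative values, constant h(s) = h
mean_w = 0.0   # Monte Carlo estimate of E_{Q_h}[W_h(T)] = E_P[e^{-Z_h(T)} (W(T) + h T)]
mass = 0.0     # E_P[e^{-Z_h(T)}], should equal 1 (Q_h is a probability measure)
for _ in range(n):
    w = math.sqrt(T) * random.gauss(0.0, 1.0)        # W(T) under P
    density = math.exp(-(h * w + 0.5 * h * h * T))   # e^{-Z_h(T)} for constant h
    mean_w += density * (w + h * T)
    mass += density
mean_w /= n
mass /= n
print(abs(mean_w) < 0.02, abs(mass - 1.0) < 0.01)
```

Both estimates converge at the usual Monte Carlo rate; an exact computation with the normal moment generating function shows the reweighted mean is identically 0.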


We would like to spend more time on the Black-Scholes model and the corresponding risk-neutral measure. Again we have a trading strategy (a(t), b(t)) of a financial asset and a bond. Its portfolio value V(t) := V(S(t),t) is given by V(t) = a(t)S(t) + b(t)\beta(t). Here S(t) is the price of the asset at time t and \beta(t) is the price of the bond at time t. It is assumed that the process t \mapsto S(t) follows a geometric Brownian motion: dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t), or S(t) = S(0)e^{(\mu - \frac{1}{2}\sigma^2)t + \sigma W(t)}. Let \widetilde{S}(t) be the discounted price of the asset, i.e.,

\widetilde{S}(t) = \frac{\beta(0)}{\beta(t)}\,S(t). (3.218)

Put \widetilde{W}(t) = W(t) + \int_0^t q(s)\,ds, where q(s) = \frac{1}{\sigma}\Bigl(\mu - \frac{\beta'(s)}{\beta(s)}\Bigr). Then the process \widetilde{S}(t) satisfies the equation

d\widetilde{S}(t) = \sigma\widetilde{S}(t)\,d\Bigl(\frac{1}{\sigma}\int_0^t \Bigl(\mu - \frac{\beta'(s)}{\beta(s)}\Bigr)ds + W(t)\Bigr) = \sigma\widetilde{S}(t)\,d\widetilde{W}(t). (3.219)

Put Z_q(t) = \int_0^t q(s)\,dW(s) + \frac{1}{2}\int_0^t q(s)^2\,ds. By Girsanov's theorem the process t \mapsto \widetilde{W}(t) is a (standard) Brownian motion under the measure Q_q given by Q_q(A) = \mathbb{E}\bigl[e^{-Z_q(T)}1_A\bigr], A \in \mathcal{F}_T. The solution \widetilde{S}(t) of the SDE in (3.219) can be written in the form

\widetilde{S}(t) = \widetilde{S}(0)\,e^{\sigma\widetilde{W}(t) - \frac{1}{2}\sigma^2 t}. (3.220)

Assuming that the portfolio is self-financing we will show that

V(t) = \frac{\beta(t)}{\beta(T)}\,\mathbb{E}^{Q_q}\bigl[h(S(T)) \mid \mathcal{F}_t\bigr], \quad t \in [0,T], (3.221)

where V(T) is equal to the contingent claim h(S(T)) at the time of maturity T. Of course, \mathbb{E}^{Q_q}[F \mid \mathcal{F}_t] denotes the conditional expectation of the variable F \in L^1(\Omega, \mathcal{F}_T, Q_q) relative to Q_q, given the \sigma-field \mathcal{F}_t = \sigma(W(s) : s \leq t), with respect to the probability measure Q_q. Another application of Itô's lemma,




together with the definition of \widetilde{S}(t), \widetilde{W}(t) and \widetilde{V}(t) = \frac{\beta(0)}{\beta(t)}V(t), shows the following result:

d\widetilde{V}(t) = -\frac{\beta(0)\beta'(t)}{\beta(t)^2}\,V(t)\,dt + \frac{\beta(0)}{\beta(t)}\,dV(t)

(the hedging strategy (a(t), b(t)) is self-financing)

= -\frac{\beta(0)\beta'(t)}{\beta(t)^2}\,V(t)\,dt + \frac{\beta(0)}{\beta(t)}\bigl(a(t)\,dS(t) + b(t)\,d\beta(t)\bigr)

(employ the equation for the asset price S(t))

= -\frac{\beta(0)\beta'(t)}{\beta(t)^2}\bigl(a(t)S(t) + b(t)\beta(t)\bigr)dt + \frac{\beta(0)}{\beta(t)}\Bigl(a(t)S(t)\bigl(\mu\,dt + \sigma\,dW(t)\bigr) + b(t)\beta'(t)\,dt\Bigr)

= \frac{\beta(0)}{\beta(t)}\,a(t)S(t)\Bigl(\Bigl(\mu - \frac{\beta'(t)}{\beta(t)}\Bigr)dt + \sigma\,dW(t)\Bigr)

= \sigma a(t)\widetilde{S}(t)\,d\widetilde{W}(t). (3.222)




In fact the equality in (3.222) could also have been obtained by observing that

d\widetilde{V}(t) = a(t)\,d\widetilde{S}(t), \quad \text{and} \quad d\widetilde{S}(t) = \sigma\widetilde{S}(t)\,d\widetilde{W}(t). (3.223)

From (3.222) we infer

\widetilde{V}(t) = \widetilde{V}(0) + \sigma\int_0^t a(s)\widetilde{S}(s)\,d\widetilde{W}(s), (3.224)


and hence the process t \mapsto \widetilde{V}(t) is a martingale with respect to the measure Q_q. So from (3.224) we get

\widetilde{V}(t) = \mathbb{E}^{Q_q}\bigl[\widetilde{V}(T) \mid \mathcal{F}_t\bigr] = \mathbb{E}^{Q_q}\Bigl[\frac{\beta(0)}{\beta(T)}\,V(T) \,\Big|\, \mathcal{F}_t\Bigr], (3.225)

and hence

V(t) = \frac{\beta(t)}{\beta(0)}\,\widetilde{V}(t) = \frac{\beta(t)}{\beta(T)}\,\mathbb{E}^{Q_q}\bigl[V(T) \mid \mathcal{F}_t\bigr] = \frac{\beta(t)}{\beta(T)}\,\mathbb{E}^{Q_q}\bigl[h(S(T)) \mid \mathcal{F}_t\bigr]. (3.226)

In addition, we observe that

\frac{\beta(T)}{\beta(t)}\,S(t)\,e^{-\frac{1}{2}\sigma^2(T-t) + \sigma(\widetilde{W}(T) - \widetilde{W}(t))}

= S(t)\exp\Bigl(\int_t^T \Bigl(\frac{\beta'(s)}{\beta(s)} - \frac{1}{2}\sigma^2\Bigr)ds + \sigma\bigl(\widetilde{W}(T) - \widetilde{W}(t)\bigr)\Bigr)

= S(0)\exp\Bigl(\sigma W(t) + \Bigl(\mu - \frac{1}{2}\sigma^2\Bigr)t\Bigr)\exp\Bigl(\int_t^T \Bigl(\frac{\beta'(s)}{\beta(s)} - \frac{1}{2}\sigma^2\Bigr)ds + \sigma\Bigl(W(T) - W(t) + \int_t^T q(s)\,ds\Bigr)\Bigr)

= S(0)\,e^{(\mu - \frac{1}{2}\sigma^2)T + \sigma W(T)} = S(T), (3.227)

where in the last step we used \sigma q(s) = \mu - \beta'(s)/\beta(s). Inserting the equality for S(T) from (3.227) into (3.226) yields

V(t) = \frac{\beta(t)}{\beta(T)}\,\mathbb{E}^{Q_q}\Bigl[h\Bigl(S(t)\,\frac{\beta(T)}{\beta(t)}\,e^{-\frac{1}{2}\sigma^2(T-t) + \sigma(\widetilde{W}(T) - \widetilde{W}(t))}\Bigr) \,\Big|\, \mathcal{F}_t\Bigr]. (3.228)

Since the process t \mapsto \widetilde{W}(t) is a Q_q-Brownian motion, the variable \widetilde{W}(T) - \widetilde{W}(t) and the \sigma-field \mathcal{F}_t are Q_q-independent. Moreover, the process t \mapsto \beta(t) is supposed to be deterministic. Hence, since the variable S(t) is measurable with respect to \mathcal{F}_t, we deduce that

V(t) = V(S(t),t) = \frac{\beta(t)}{\beta(T)}\,\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} h\Bigl(x\,\frac{\beta(T)}{\beta(t)}\,e^{-\frac{1}{2}\sigma^2(T-t) + \sigma\sqrt{T-t}\,y}\Bigr)e^{-\frac{1}{2}y^2}\,dy\,\Bigr|_{x = S(t)}. (3.229)

Hence if the pay-off, i.e. the value of the call option at expiry (time of maturity T), is given by h(S(T)) = \max\{S(T) - K, 0\}, then the value of the portfolio at time t \leq T is given by the formula in (3.229). If \beta(t) = \beta(0)e^{rt}, then this integral can be rewritten as in (3.215) with C(S,t) = C(S(t),t) = V(S(t),t) = V(t). Similarly, if h(S(T)) = \max\{K - S(T), 0\}, then P(S,t) = P(S(t),t) = V(S(t),t) = V(t) is the price of a European put option: see the somewhat




more explicit expression in (3.217). For a modern treatment of several stock price models see, e.g., Gulisashvili [60].


8. An Ornstein-Uhlenbeck process in higher dimensions 


Part of this text is taken from [146]. Let C(t,s), t \geq s, t,s \in \mathbb{R}, be a family of d \times d matrices with real entries, with the following properties:

(a) C(t,t) = I, t \in \mathbb{R} (I stands for the identity matrix).

(b) The identity C(t,s)C(s,r) = C(t,r) holds for all real numbers t, s, r for which t \geq s \geq r.

(c) The matrix valued function (t,s,x) \mapsto C(t,s)x is continuous as a function from the set \{(t,s) \in \mathbb{R} \times \mathbb{R} : t \geq s\} \times \mathbb{R}^d to \mathbb{R}^d.

Define the backward propagator Y_C on C_b(\mathbb{R}^d) by Y_C(s,t)f(x) = f(C(t,s)x), x \in \mathbb{R}^d, s \leq t, and f \in C_b(\mathbb{R}^d). Then Y_C is a backward propagator on the space C_b(\mathbb{R}^d), which is (C_b(\mathbb{R}^d), M(\mathbb{R}^d))-continuous. Here the symbol M(\mathbb{R}^d) stands for the vector space of all signed measures on \mathbb{R}^d. The operator family \{Y_C(s,t) : s \leq t\} satisfies Y_C(s_1, s_2)\,Y_C(s_2, s_3) = Y_C(s_1, s_3), s_1 \leq s_2 \leq s_3.


Let W(t) be standard m-dimensional Brownian motion on (\Omega, \mathcal{F}_t, \mathbb{P}) and let \sigma(\rho) be a deterministic continuous function which takes its values in the space of d \times m matrices. Put Q(\rho) = \sigma(\rho)\sigma(\rho)^*. Another interesting example is the following:

Y_{C,Q}(s,t)f(x)
= \frac{1}{(2\pi)^{d/2}}\int e^{-\frac{1}{2}|y|^2}\, f\Bigl(C(t,s)x + \Bigl(\int_s^t C(t,\rho)Q(\rho)C(t,\rho)^*\,d\rho\Bigr)^{1/2} y\Bigr)dy
= \mathbb{E}\Bigl[f\Bigl(C(t,s)x + \int_s^t C(t,\rho)\sigma(\rho)\,dW(\rho)\Bigr)\Bigr], (3.230)

where Q(\rho) = \sigma(\rho)\sigma(\rho)^* is a positive-definite d \times d matrix. Then the propagators Y_{C,Q} and Y_{C,S} are backward propagators on C_b(\mathbb{R}^d). We will prove this. The equality of the expressions in (3.230) is a consequence of the following arguments. Let the variable \xi \in \mathbb{R}^d have the standard normal distribution. Fix \tau \leq t. Both variables

X_{\tau,x}(t) := C(t,\tau)x + \int_\tau^t C(t,\rho)\sigma(\rho)\,dW(\rho), \quad t \geq \tau, \quad \text{and}
C(t,\tau)x + \Bigl(\int_\tau^t C(t,\rho)Q(\rho)C(t,\rho)^*\,d\rho\Bigr)^{1/2}\xi, \quad t \geq \tau, (3.231)

are \mathbb{R}^d-valued Gaussian vectors. A calculation shows that they have the same expectation and the same covariance matrix with entries given by (3.242) below with s = t.
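In the scalar Ornstein-Uhlenbeck case C(t,s) = e^{-\lambda(t-s)} with constant \sigma, the covariance integral in (3.231) can be computed in closed form, which gives a simple consistency check for the stochastic-integral representation of X_{\tau,x}(t). The sketch below (with \tau = 0 and arbitrary illustrative parameters) compares Monte Carlo moments with C(t,0)x and \sigma^2(1 - e^{-2\lambda t})/(2\lambda).

```python
import math
import random

# Scalar Ornstein-Uhlenbeck case: C(t, s) = exp(-lam (t - s)), constant sigma, Q = sigma^2.
lam, sigma, x, t = 1.0, 0.5, 2.0, 1.0   # arbitrary illustrative parameters

def sample_X(n_steps=200):
    """Riemann-sum discretisation of X(t) = C(t,0) x + int_0^t C(t,rho) sigma dW(rho)."""
    dt = t / n_steps
    acc = 0.0
    for i in range(n_steps):
        rho = i * dt
        acc += math.exp(-lam * (t - rho)) * sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    return math.exp(-lam * t) * x + acc

random.seed(4)
n = 20000
xs = [sample_X() for _ in range(n)]
mean = sum(xs) / n
var = sum((v - mean) ** 2 for v in xs) / n
exact_mean = math.exp(-lam * t) * x                                # C(t,0) x
exact_var = sigma ** 2 * (1 - math.exp(-2 * lam * t)) / (2 * lam)  # int_0^t C(t,rho)^2 Q drho
print(abs(mean - exact_mean) < 0.02, abs(var - exact_var) < 0.02)
```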


Next suppose that the forward propagator C on \mathbb{R}^d consists of contractive operators, i.e. C(t,s)C(t,s)^* \leq I (this inequality is to be taken in matrix sense).




Choose a family S(t,s) of square d \times d matrices such that C(t,s)C(t,s)^* + S(t,s)S(t,s)^* = I, and put

Y_{C,S}(s,t)f(x) = \frac{1}{(2\pi)^{d/2}}\int e^{-\frac{1}{2}|y|^2}\, f\bigl(C(t,s)x + S(t,s)y\bigr)dy. (3.232)


In fact the example in (3.232) is a special case of the example in (3.230) provided Q(\rho) is given by the following limit:

Q(\rho) = \lim_{h \downarrow 0}\frac{I - C(\rho, \rho - h)C(\rho, \rho - h)^*}{h}. (3.233)

If Q(\rho) is as in (3.233), then

S(t,s)S(t,s)^* = I - C(t,s)C(t,s)^* = \int_s^t C(t,\rho)Q(\rho)C(t,\rho)^*\,d\rho.

The following auxiliary lemma will be useful. Condition (3.234) is satisfied if the three pairs (C_1, S_1), (C_2, S_2), and (C_3, S_3) satisfy: C_1C_1^* + S_1S_1^* = C_2C_2^* + S_2S_2^* = C_3C_3^* + S_3S_3^* = I. It also holds if C_2 = C(t_2, t_1), and

S_jS_j^* = \int_{t_{j-1}}^{t_j} C(t_j, \rho)\sigma(\rho)\sigma(\rho)^*C(t_j, \rho)^*\,d\rho, \quad j = 1, 2, \quad \text{and}
S_3S_3^* = \int_{t_0}^{t_2} C(t_2, \rho)\sigma(\rho)\sigma(\rho)^*C(t_2, \rho)^*\,d\rho.






3.89. Lemma. Let $C_1$, $S_1$, $C_2$, $S_2$, and $C_3$, $S_3$ be $d \times d$-matrices with the following properties:
\[
C_3 = C_2C_1, \quad\text{and}\quad C_2S_1S_1^*C_2^* + S_2S_2^* = S_3S_3^*.
\tag{3.234}
\]
Let $f \in C_b(\mathbb{R}^d)$, and put
\[
Y_{1,2}f(x) = \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2} f(C_1x + S_1y)\,dy;
\tag{3.235}
\]
\[
Y_{2,3}f(x) = \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2} f(C_2x + S_2y)\,dy;
\tag{3.236}
\]
\[
Y_{1,3}f(x) = \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2} f(C_3x + S_3y)\,dy.
\tag{3.237}
\]
Then $Y_{1,2}Y_{2,3} = Y_{1,3}$.





Proof. Let the matrices $C_j$ and $S_j$, $1 \le j \le 3$, be as in (3.234). Let $f \in C_b(\mathbb{R}^d)$. First we assume that the matrices $S_1$ and $C_2$ are invertible, and we put $A_3 = S_1^{-1}C_2^{-1}S_3$ and $A_2 = S_1^{-1}C_2^{-1}S_2$. Then, using the equalities in (3.234), we see $A_3A_3^* = I + A_2A_2^*$. We choose a $d \times d$-matrix $A$ such that $A^*A = I + A_2^*A_2$, and we put $D = (A^{-1})^*A_2^*A_3$. Then we have $A_3^*A_3 = I + D^*D$. Let the vectors $(y_1,y_2) \in \mathbb{R}^d \times \mathbb{R}^d$ and $(y,z) \in \mathbb{R}^d \times \mathbb{R}^d$ be related by
\[
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
= \begin{pmatrix} A_3 & -A_2A^{-1} \\ 0 & A^{-1} \end{pmatrix}
\begin{pmatrix} y \\ z \end{pmatrix}.
\tag{3.238}
\]
Since
\[
A_2A_2^*\bigl(I + A_2A_2^*\bigr) = A_2\bigl(I + A_2^*A_2\bigr)A_2^*,
\]
we obtain
\[
\det\bigl(I + A_2A_2^*\bigr) = \det\bigl(I + A_2^*A_2\bigr).
\]
Hence, the absolute value of the determinant of the matrix in the right-hand side of (3.238) can be rewritten as:
\[
\Bigl| \det \begin{pmatrix} A_3 & -A_2A^{-1} \\ 0 & A^{-1} \end{pmatrix} \Bigr|
= \bigl| \det A_3\,(\det A)^{-1} \bigr|
= \Bigl( \frac{\det(A_3^*A_3)}{\det(A^*A)} \Bigr)^{1/2}
= \Bigl( \frac{\det(I + A_2A_2^*)}{\det(I + A_2^*A_2)} \Bigr)^{1/2} = 1.
\tag{3.239}
\]
From (3.238) and (3.239) it follows that the corresponding volume elements satisfy: $dy_1\,dy_2 = dy\,dz$. We also have
\[
|y_1|^2 + |y_2|^2 = |y|^2 + |z - Dy|^2.
\tag{3.240}
\]
Employing the substitution (3.238) together with the equality $dy_1\,dy_2 = dy\,dz$ and (3.240), and applying Fubini's theorem, we obtain (note that under (3.238) one has $C_2S_1y_1 + S_2y_2 = C_2S_1(y_1 + A_2y_2) = C_2S_1A_3y = S_3y$):
\[
Y_{1,2}Y_{2,3}f(x)
= \frac{1}{(2\pi)^d} \iint e^{-\frac{1}{2}(|y_1|^2 + |y_2|^2)} f\bigl( C_2C_1x + C_2S_1y_1 + S_2y_2 \bigr)\,dy_1\,dy_2
\]
\[
= \frac{1}{(2\pi)^d} \iint e^{-\frac{1}{2}(|y|^2 + |z - Dy|^2)} f\bigl( C_3x + S_3y \bigr)\,dy\,dz
= \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2} f\bigl( C_3x + S_3y \bigr)\,dy
= Y_{1,3}f(x)
\tag{3.241}
\]
for all $f \in C_b(\mathbb{R}^d)$. If the matrices $S_1$ and $C_2$ are not invertible, then we replace $C_1$ with $C_{1,\varepsilon} = e^{-\varepsilon}C_1$ and $S_1$ with $S_{1,\varepsilon}$ satisfying $C_{1,\varepsilon}C_{1,\varepsilon}^* + S_{1,\varepsilon}S_{1,\varepsilon}^* = I$ and $\lim_{\varepsilon\downarrow 0} S_{1,\varepsilon} = S_1$. We take $S_{2,\varepsilon} = e^{-\varepsilon}S_2$ instead of $S_2$. In addition, we choose the matrices $C_{2,\varepsilon}$, $\varepsilon > 0$, in such a way that $C_{2,\varepsilon}C_{2,\varepsilon}^* + S_{2,\varepsilon}S_{2,\varepsilon}^* = I$, and $\lim_{\varepsilon\downarrow 0} C_{2,\varepsilon} = C_2$.

This completes the proof of Lemma 3.89. $\square$
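The matrix identities driving this proof can be checked numerically. The sketch below (NumPy; the construction of random contraction pairs is our own illustration, not the book's) builds matrices satisfying (3.234) via the unit-sum condition $C_jC_j^* + S_jS_j^* = I$, and verifies $A_3A_3^* = I + A_2A_2^*$ as well as the fact that the Jacobian in (3.239) has modulus one:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def contraction_pair(rng, d):
    # random strict contraction C, and S with C C* + S S* = I
    M = rng.standard_normal((d, d))
    C = 0.9 * M / np.linalg.norm(M, 2)
    S = np.linalg.cholesky(np.eye(d) - C @ C.T)  # any square root works
    return C, S

C1, S1 = contraction_pair(rng, d)
C2, S2 = contraction_pair(rng, d)
C3 = C2 @ C1
# condition (3.234): S3 S3* = C2 S1 S1* C2* + S2 S2*
S3 = np.linalg.cholesky(C2 @ S1 @ S1.T @ C2.T + S2 @ S2.T)

# the matrices used in the proof: A2 = S1^{-1} C2^{-1} S2, A3 = S1^{-1} C2^{-1} S3
A2 = np.linalg.solve(C2 @ S1, S2)
A3 = np.linalg.solve(C2 @ S1, S3)

# A3 A3* = I + A2 A2*
err = np.abs(A3 @ A3.T - (np.eye(d) + A2 @ A2.T)).max()

# |det A3|^2 = det(I + A2 A2*) = det(I + A2* A2) = |det A|^2,
# so the Jacobian of the substitution (3.238) has modulus one
jac_mod = abs(np.linalg.det(A3)) / np.sqrt(np.linalg.det(np.eye(d) + A2.T @ A2))
print(err, jac_mod)
```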


We formulate a proposition in which an Ornstein-Uhlenbeck process plays a central role. Here $\rho \mapsto \sigma(\rho)$ is a deterministic square matrix function, and $Q(\rho) = \sigma(\rho)\sigma(\rho)^*$.

3.90. Proposition. Put $X^{\tau,x}(t) = C(t,\tau)x + \int_\tau^t C(t,\rho)\sigma(\rho)\,dW(\rho)$. Then the process $X^{\tau,x}(t)$ is Gaussian. Its expectation is given by $\mathbb{E}[X^{\tau,x}(t)] = C(t,\tau)x$, and its covariance matrix has entries ($s$, $t \ge \tau$)
\[
\operatorname{cov}\bigl( X^{\tau,x}_{j_1}(s), X^{\tau,x}_{j_2}(t) \bigr)
= \Bigl( \int_\tau^{\min(s,t)} C(s,\rho)Q(\rho)C(t,\rho)^*\,d\rho \Bigr)_{j_1,j_2}.
\tag{3.242}
\]
Let $\{(\Omega, \mathcal{F}, \mathbb{P}_{\tau,x}), (X(t), t \ge 0), (\mathbb{R}^d, \mathcal{B}_d)\}$ be the corresponding time-inhomogeneous Markov process. By definition, the $\mathbb{P}$-distribution of the process $t \mapsto X^{\tau,x}(t)$, $t \ge \tau$, is the $\mathbb{P}_{\tau,x}$-distribution of the process $t \mapsto X(t)$, $t \ge \tau$. Then this process is generated by the family of operators $L(t)$, $t \ge 0$, where
\[
L(t)f(x) = \frac{1}{2}\sum_{j,k=1}^d Q_{j,k}(t)\,D_jD_kf(x) + \bigl\langle \nabla f(x), A(t)x \bigr\rangle.
\tag{3.243}
\]
Here the matrix-valued function $A(t)$ is given by $A(t) = \lim_{h\downarrow 0} \dfrac{C(t+h,t) - I}{h}$. The semigroup $e^{sL(t)}$, $s \ge 0$, is given by
\[
e^{sL(t)}f(x) = \mathbb{E}\Bigl[ f\Bigl( e^{sA(t)}x + \int_0^s e^{(s-\rho)A(t)}\sigma(t)\,dW(\rho) \Bigr) \Bigr]
\]
\[
= \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2}
  f\Bigl( e^{sA(t)}x + \Bigl( \int_0^s e^{\rho A(t)}Q(t)e^{\rho A(t)^*}\,d\rho \Bigr)^{1/2} y \Bigr)\,dy
= \int p(s,x,y;t)f(y)\,dy,
\tag{3.244}
\]
where, with $Q_{A(t)}(s) = \int_0^s e^{\rho A(t)}Q(t)e^{\rho A(t)^*}\,d\rho$, the integral kernel $p(s,x,y;t)$ is given by
\[
p(s,x,y;t) = \frac{1}{(2\pi)^{d/2}\bigl(\det Q_{A(t)}(s)\bigr)^{1/2}}
\exp\Bigl( -\tfrac{1}{2}\Bigl\langle \bigl(Q_{A(t)}(s)\bigr)^{-1}\bigl(y - e^{sA(t)}x\bigr),\; y - e^{sA(t)}x \Bigr\rangle \Bigr).
\]
If all eigenvalues of the matrix $A(t)$ have strictly negative real part, then the measure
\[
B \mapsto \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2}
\mathbf{1}_B\Bigl( \Bigl( \int_0^\infty e^{\rho A(t)}Q(t)e^{\rho A(t)^*}\,d\rho \Bigr)^{1/2} y \Bigr)\,dy
\]
defines an invariant measure for the semigroup $e^{sL(t)}$, $s \ge 0$.


A Markov process of the form $\{(\Omega, \mathcal{F}, \mathbb{P}_{\tau,x}), (X(t), t \ge 0), (\mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d})\}$ is called a (generalized) Ornstein-Uhlenbeck process. It becomes time-homogeneous by putting $C(t,s) = e^{-(t-s)A}$, where $A$ is a square $d \times d$-matrix. We will elaborate on the time-homogeneous case. In this case we write, for $x$, $b \in \mathbb{R}^d$,
\[
S(t)f(x) := \mathbb{E}\Bigl[ f\Bigl( e^{-tA}x + \bigl(I - e^{-tA}\bigr)b + \int_0^t e^{-(t-s)A}\sigma\,dW(s) \Bigr) \Bigr],
\tag{3.245}
\]
where $f : \mathbb{R}^d \to \mathbb{C}$ is a bounded Borel measurable function. If $f$ belongs to $C_0(\mathbb{R}^d)$, then $S(t)f$ does so as well. For brevity we write
\[
X^x(t) = e^{-tA}x + \bigl(I - e^{-tA}\bigr)b + \int_0^t e^{-(t-s)A}\sigma\,dW(s).
\]
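The mean $e^{-tA}x + (I - e^{-tA})b$ and the variance $\int_0^t e^{-2sA}\sigma^2\,ds$ of $X^x(t)$ can be checked by simulation. A minimal $d = 1$ sketch (all parameter values here are illustrative choices of our own), discretizing the dynamics $dX = A(b - X)\,dt + \sigma\,dW$ by the Euler-Maruyama scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, sigma, x, t = 0.7, 0.25, 0.4, 1.5, 2.0

mean_exact = np.exp(-a * t) * x + (1.0 - np.exp(-a * t)) * b
var_exact = sigma**2 * (1.0 - np.exp(-2.0 * a * t)) / (2.0 * a)

# Euler-Maruyama discretization of dX = a (b - X) dt + sigma dW
n_paths, n_steps = 100_000, 300
dt = t / n_steps
X = np.full(n_paths, x)
for _ in range(n_steps):
    X += a * (b - X) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

print(X.mean(), mean_exact, X.var(), var_exact)
```

The sample mean and variance should agree with the closed-form expressions up to Monte Carlo and discretization error.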






It also follows that for such functions $\lim_{t\downarrow 0} S(t)f(x) = f(x)$ for all $x \in \mathbb{R}^d$. Since we also have the semigroup property $S(t_1 + t_2)f = S(t_1)S(t_2)f$ for all $t_1$, $t_2 \ge 0$, it follows that the semigroup $t \mapsto S(t)$ is in fact a Feller semigroup. Theorem 3.37 implies that there exists a time-homogeneous Markov process
\[
\bigl\{ (\Omega, \mathcal{F}, \mathbb{P}_x)_{x\in\mathbb{R}^d},\, (X(t), t \ge 0),\, (\vartheta_t, t \ge 0),\, (\mathbb{R}^d, \mathcal{B}_{\mathbb{R}^d}) \bigr\}
\]
such that for a bounded Borel function $f$ we have
\[
\mathbb{E}_x\bigl[ f(X(t)) \bigr] = \mathbb{E}\bigl[ f(X^x(t)) \bigr] = S(t)f(x), \quad x \in \mathbb{R}^d.
\tag{3.246}
\]
Next we prove the semigroup property. First we observe that, for $x \in \mathbb{R}^d$ and $t_1$, $t_2 \ge 0$,
\[
X^x(t_1 + t_2) = e^{-t_2A}X^x(t_1) + \bigl(I - e^{-t_2A}\bigr)b + \int_0^{t_2} e^{-(t_2-s)A}\sigma\,dW(s + t_1).
\tag{3.247}
\]

Let $(\Omega_W, \mathcal{F}_W, \mathbb{P})$ be the probability space on which the process $t \mapsto W(t)$ is a Brownian motion. Let $(\mathcal{F}^W_t)_{t\ge 0}$ be the internal history of the Brownian motion $\{W(t) : t \ge 0\}$, so that $\mathcal{F}^W_t = \sigma(W(s) : s \le t)$. Then by the equality in (3.247) we have
\[
\mathbb{E}\bigl[ f(X^x(t_1+t_2)) \bigm| \mathcal{F}^W_{t_1} \bigr]
= \mathbb{E}\Bigl[ f\Bigl( e^{-t_2A}X^x(t_1) + \bigl(I - e^{-t_2A}\bigr)b + \int_0^{t_2} e^{-(t_2-s)A}\sigma\,dW(s+t_1) \Bigr) \Bigm| \mathcal{F}^W_{t_1} \Bigr].
\tag{3.248}
\]
We employ the fact that the state variable $X^x(t_1)$ is $\mathcal{F}^W_{t_1}$-measurable, and that
\[
\int_{t_1}^{t_1+t_2} e^{-(t_1+t_2-s)A}\sigma\,dW(s) = \int_0^{t_2} e^{-(t_2-s)A}\sigma\,d\bigl\{ W(s+t_1) - W(t_1) \bigr\}
\]
is $\mathbb{P}$-independent of $\mathcal{F}^W_{t_1}$ and possesses the same $\mathbb{P}$-distribution as the variable $\int_0^{t_2} e^{-(t_2-s)A}\sigma\,dW(s)$, to conclude from (3.248) the following equality:
\[
\mathbb{E}\bigl[ f(X^x(t_1+t_2)) \bigm| \mathcal{F}^W_{t_1} \bigr]
= \mathbb{E}\Bigl[ f\Bigl( e^{-t_2A}z + \bigl(I - e^{-t_2A}\bigr)b + \int_0^{t_2} e^{-(t_2-s)A}\sigma\,dW(s) \Bigr) \Bigr]\Bigr|_{z=X^x(t_1)}
= \mathbb{E}\bigl[ f(X^z(t_2)) \bigr]\bigr|_{z=X^x(t_1)}.
\tag{3.249}
\]
From (3.249) it follows that the process $t \mapsto X^x(t)$ is a Markov process and that, by the definition of the operators $S(t)$, $t \ge 0$,
\[
S(t_1+t_2)f(x) = \mathbb{E}\bigl[ f(X^x(t_1+t_2)) \bigr]
= \mathbb{E}\Bigl[ \mathbb{E}\bigl[ f(X^z(t_2)) \bigr]\bigr|_{z=X^x(t_1)} \Bigr]
= \mathbb{E}\bigl[ S(t_2)f(X^x(t_1)) \bigr] = S(t_1)S(t_2)f(x).
\tag{3.250}
\]
We calculate the differential $dX^x(t)$ and the covariation process $\langle X^x_{j_1}, X^x_{j_2}\rangle(t)$:
\[
dX^x(t) = -A\bigl(X^x(t) - b\bigr)\,dt + \sigma\,dW(t), \quad\text{and}
\tag{3.251}
\]
\[
\langle X^x_{j_1}, X^x_{j_2}\rangle(t) = \int_0^t \bigl( e^{-sA}\sigma\sigma^*e^{-sA^*} \bigr)_{j_1,j_2}\,ds
= \operatorname{cov}\bigl( X^x_{j_1}(t), X^x_{j_2}(t) \bigr).
\tag{3.252}
\]




In other words the process $t \mapsto X^x(t)$ satisfies the equation
\[
X^x(t) = x + \int_0^t A\bigl(b - X^x(s)\bigr)\,ds + \int_0^t \sigma\,dW(s).
\tag{3.253}
\]
Since its covariation is deterministic, the covariation coincides with its covariance: see (3.252). Let $f : \mathbb{R}^d \to \mathbb{C}$ be a bounded continuous function with bounded and continuous first and second order derivatives. Next we apply Itô's lemma, and employ (3.251) and (3.252) to obtain
\[
f(X^x(t)) - f(X^x(0)) = \int_0^t \nabla f(X^x(s)) \cdot \bigl\{ -A\bigl(X^x(s) - b\bigr) \bigr\}\,ds
\]
\[
+ \frac{1}{2}\sum_{j_1,j_2=1}^d \int_0^t D_{j_1}D_{j_2}f(X^x(s))\,\bigl( e^{-sA}\sigma\sigma^*e^{-sA^*} \bigr)_{j_1,j_2}\,ds
+ \int_0^t \nabla f(X^x(s)) \cdot \sigma\,dW(s).
\tag{3.254}
\]
Upon taking expectations in the right-hand and left-hand sides of (3.254), using the fact that the stochastic integral in (3.254) is a martingale, and letting $t \downarrow 0$, this shows:
\[
L_Af(x) := \lim_{t\downarrow 0} \frac{S(t)f(x) - f(x)}{t}
= -\bigl\langle A(x-b), \nabla f(x) \bigr\rangle + \frac{1}{2}\sum_{j_1,j_2=1}^d \bigl(\sigma\sigma^*\bigr)_{j_1,j_2} D_{j_1}D_{j_2}f(x).
\tag{3.255}
\]


In the following proposition we collect the main properties of the time-homogeneous Ornstein-Uhlenbeck process $t \mapsto X^x(t)$. It is adapted from Proposition 3.90. In addition, $\sigma = \sigma(\rho)$ is independent of $\rho$.


3.91. Proposition. Put $X^x(t) = e^{-tA}x + \bigl(I - e^{-tA}\bigr)b + \int_0^t e^{-(t-\rho)A}\sigma\,dW(\rho)$. Then the process $X^x(t)$ is Gaussian. Its expectation is given by $\mathbb{E}[X^x(t)] = e^{-tA}x + \bigl(I - e^{-tA}\bigr)b$, and its covariance matrix has entries
\[
\operatorname{cov}\bigl( X^x_{j_1}(s), X^x_{j_2}(t) \bigr)
= \Bigl( \int_0^{\min(s,t)} e^{-(s-\rho)A}\sigma\sigma^*e^{-(t-\rho)A^*}\,d\rho \Bigr)_{j_1,j_2}.
\tag{3.256}
\]
Let $\{(\Omega, \mathcal{F}, \mathbb{P}_x), (X(t), t \ge 0), (\mathbb{R}^d, \mathcal{B}_d)\}$ be the corresponding time-homogeneous Markov process. By definition, the $\mathbb{P}$-distribution of the process $t \mapsto X^x(t)$, $t \ge 0$, is the $\mathbb{P}_x$-distribution of the process $t \mapsto X(t)$, $t \ge 0$. Then this process is generated by the operator $L_A$, where
\[
L_Af(x) = \frac{1}{2}\sum_{j_1,j_2=1}^d \bigl(\sigma\sigma^*\bigr)_{j_1,j_2} D_{j_1}D_{j_2}f(x) - \bigl\langle \nabla f(x), A(x-b) \bigr\rangle.
\tag{3.257}
\]
The semigroup $e^{sL_A}$, $s \ge 0$, is given by
\[
e^{sL_A}f(x) = \mathbb{E}\Bigl[ f\Bigl( e^{-sA}(x-b) + b + \int_0^s e^{-(s-\rho)A}\sigma\,dW(\rho) \Bigr) \Bigr]
\]
\[
= \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2}
  f\Bigl( e^{-sA}(x-b) + b + \Bigl( \int_0^s e^{-\rho A}\sigma\sigma^*e^{-\rho A^*}\,d\rho \Bigr)^{1/2} y \Bigr)\,dy
= \int p_A(s,x,y)f(y)\,dy,
\tag{3.258}
\]
where, with $Q_A(s) = \int_0^s e^{-\rho A}\sigma\sigma^*e^{-\rho A^*}\,d\rho$, the integral kernel $p_A(s,x,y)$ is given by
\[
p_A(s,x,y) = \frac{1}{(2\pi)^{d/2}\bigl(\det Q_A(s)\bigr)^{1/2}}
\exp\Bigl( -\tfrac{1}{2}\Bigl\langle \bigl(Q_A(s)\bigr)^{-1}\bigl(y - e^{-sA}(x-b) - b\bigr),\; y - e^{-sA}(x-b) - b \Bigr\rangle \Bigr).
\]
If all eigenvalues of the matrix $A$ have strictly positive real part, then the measure
\[
B \mapsto \frac{1}{(2\pi)^{d/2}} \int e^{-\frac{1}{2}|y|^2}
\mathbf{1}_B\Bigl( b + \Bigl( \int_0^\infty e^{-\rho A}\sigma\sigma^*e^{-\rho A^*}\,d\rho \Bigr)^{1/2} y \Bigr)\,dy
\tag{3.259}
\]
defines an invariant measure for the semigroup $e^{sL_A}$, $s \ge 0$.

Proof. The results in Proposition 3.91 follow more or less directly from those in Proposition 3.90. The result in (3.259) follows by letting $s \to \infty$ in the second equality of (3.258) or in the definition of the probability density $p_A(s,x,y)$. $\square$
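For $d = 1$ and $A = a > 0$ the invariant law in (3.259) is the Gaussian $N\bigl(b, \sigma^2/(2a)\bigr)$. A small arithmetic sketch (with illustrative values of our own choosing) checks that one exact transition step of the process preserves this law:

```python
import numpy as np

# d = 1, A = a > 0: invariant law N(b, sigma^2 / (2a)) from (3.259)
a, b, sigma, s = 0.9, -0.3, 0.5, 0.8
var_inv = sigma**2 / (2.0 * a)   # int_0^infty e^{-2 rho a} sigma^2 d rho

# exact transition: X(s) = e^{-sa}(X(0) - b) + b + N(0, Q_a(s))
Q_s = sigma**2 * (1.0 - np.exp(-2.0 * a * s)) / (2.0 * a)
mean_out = np.exp(-a * s) * (b - b) + b          # stationary mean stays b
var_out = np.exp(-2.0 * a * s) * var_inv + Q_s   # propagated variance

print(mean_out, var_out, var_inv)
```

The propagated variance equals $e^{-2as}\,\sigma^2/(2a) + \sigma^2(1-e^{-2as})/(2a) = \sigma^2/(2a)$: the stationary law is reproduced exactly.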


For more information about invariant, or stationary, measures see, e.g., [146] (Chapter 10) and the references therein, like Meyn and Tweedie [97].

In order to apply our results on the Ornstein-Uhlenbeck process to bond pricing and determining interest rates in financial mathematics, the identities and results in the following proposition are very useful. They will be applied in the context of the Vasicek model.


3.92. Proposition. Let the notation and hypotheses be as in Proposition 3.91. Put
\[
A(t,T) = \int_0^{T-t} e^{-\rho A}\,d\rho = \int_t^T e^{-(s-t)A}\,ds
= A^{-1}\bigl(I - e^{-(T-t)A}\bigr), \quad 0 \le t \le T,
\]
where the last equality is only valid if $A$ is invertible. Let $y$ be a vector in $\mathbb{R}^d$. The following assertions hold true.

(1) The following identity is true for $0 \le t \le T$:
\[
\int_t^T X^x(s)\,ds = A(t,T)\bigl(X^x(t) - b\bigr) + (T-t)b + \int_t^T A(\rho,T)\sigma\,dW(\rho).
\tag{3.260}
\]

(2) The random vector $\int_t^T X^x(s)\,ds$ is Gaussian (or, what is the same, multivariate normally distributed) with conditional expectation given by
\[
\mathbb{E}\Bigl[ \int_t^T X^x(s)\,ds \Bigm| \mathcal{F}_t \Bigr]
= \mathbb{E}\Bigl[ \int_t^T X^x(s)\,ds \Bigm| X^x(t) \Bigr]
= A(t,T)\bigl(X^x(t) - b\bigr) + (T-t)b,
\tag{3.261}
\]
and covariance matrix given by ($1 \le j_1, j_2 \le d$)
\[
\operatorname{cov}\Bigl( \int_t^T X^x_{j_1}(s)\,ds,\, \int_t^T X^x_{j_2}(s)\,ds \Bigm| X^x(t) \Bigr)
= \Bigl( \int_t^T A(\rho,T)\sigma\sigma^*A(\rho,T)^*\,d\rho \Bigr)_{j_1,j_2}.
\tag{3.262}
\]

(3) The random variable $\bigl\langle y, \int_t^T X^x(s)\,ds \bigr\rangle$ is normally distributed with conditional expectation given by
\[
\mathbb{E}\Bigl[ \Bigl\langle y, \int_t^T X^x(s)\,ds \Bigr\rangle \Bigm| \mathcal{F}_t \Bigr]
= \mathbb{E}\Bigl[ \Bigl\langle y, \int_t^T X^x(s)\,ds \Bigr\rangle \Bigm| X^x(t) \Bigr]
= \bigl\langle y, A(t,T)\bigl(X^x(t) - b\bigr) \bigr\rangle + (T-t)\langle y, b\rangle,
\tag{3.263}
\]
and variance given by
\[
\operatorname{var}\Bigl( \Bigl\langle y, \int_t^T X^x(s)\,ds \Bigr\rangle \Bigm| \mathcal{F}_t \Bigr)
= \int_t^T \bigl| \sigma^*A(\rho,T)^*y \bigr|^2\,d\rho.
\tag{3.264}
\]

(4) Given $\mathcal{F}_t$, the random variable $\exp\bigl( -\bigl\langle y, \int_t^T X^x(s)\,ds \bigr\rangle \bigr)$ is log-normally distributed, and
\[
\mathbb{E}\Bigl[ \exp\Bigl( -\Bigl\langle y, \int_t^T X^x(s)\,ds \Bigr\rangle \Bigr) \Bigm| \mathcal{F}_t \Bigr]
= \mathbb{E}\Bigl[ \exp\Bigl( -\Bigl\langle y, \int_t^T X^x(s)\,ds \Bigr\rangle \Bigr) \Bigm| X^x(t) \Bigr]
\]
\[
= \exp\Bigl( -\bigl\langle y, A(t,T)\bigl(X^x(t) - b\bigr) \bigr\rangle - (T-t)\langle y, b\rangle
+ \frac{1}{2}\int_t^T \bigl| \sigma^*A(\rho,T)^*y \bigr|^2\,d\rho \Bigr).
\tag{3.265}
\]





Proof. (1) From (3.247) we see, for $s \ge t$,
\[
X^x(s) = e^{-(s-t)A}\bigl(X^x(t) - b\bigr) + b + \int_t^s e^{-(s-\rho)A}\sigma\,dW(\rho).
\tag{3.266}
\]
Then we integrate the expressions in (3.266) against $s$ for $t \le s \le T$, and we interchange the integrals with respect to $ds$ and $dW(\rho)$ to obtain the equality in (3.260). This proves assertion (1).

(2) Although the process $t \mapsto \int_t^T A(\rho,T)\sigma\,dW(\rho)$ is not a martingale, it has enough properties of a martingale that its expectation is $0$, and that its quadratic covariation matrix is given by the expression in (3.262). The reason for all this relies on the equality:
\[
\int_t^T A(\rho,T)\sigma\,dW(\rho)
= \int_0^T A(\rho,T)\sigma\,dW(\rho) - \int_0^t A(\rho,T)\sigma\,dW(\rho),
\tag{3.267}
\]
combined with the fact that the process $t \mapsto \int_0^t A(\rho,T)\sigma\,dW(\rho)$ is a martingale. So we can apply the Itô isometry and its consequences to complete the proof of assertion (2). An alternative way of understanding this reads as follows. Processes of the form $s \mapsto X^x(s)$, $s \ge 0$, and $t \mapsto \int_t^T X^x(s)\,ds$, $0 \le t \le T$, consist of Gaussian vectors with known means and variances. For $s \ge t$ we use the representation in (3.266) for $X^x(s)$, and for $\int_t^T X^x(s)\,ds$ we employ (3.260).

(3) The proof of this assertion follows the same lines as the proof of assertion (2).

(4) If the stochastic variable $Z$ is normally distributed with expectation $\mu$ and variance $v^2 = \mathbb{E}\bigl[(Z - \mu)^2\bigr]$, then $\mathbb{E}[e^Z] = e^{\mu + \frac{1}{2}v^2}$. This result is applied to the variable $Z = -\bigl\langle y, \int_t^T X^x(s)\,ds \bigr\rangle$ to obtain the equality in (3.265).

This completes the proof of Proposition 3.92. $\square$


3.93. Lemma. Let the notation and hypotheses be as in Propositions 3.91 and 3.92. Suppose that the matrix $A$ is invertible. The following equality holds for $0 \le t \le T$:
\[
\int_t^T A(\rho,T)\sigma\sigma^*A(\rho,T)^*\,d\rho
= (T-t)A^{-1}\sigma\sigma^*(A^*)^{-1} - A(t,T)A^{-1}\sigma\sigma^*(A^*)^{-1} - A^{-1}\sigma\sigma^*(A^*)^{-1}A(t,T)^*
\]
\[
+ \int_0^{T-t} e^{-\rho A}A^{-1}\sigma\sigma^*(A^*)^{-1}e^{-\rho A^*}\,d\rho.
\tag{3.268}
\]
If the invertible matrix $A$ is such that $A\sigma\sigma^* = \sigma\sigma^*A^*$, then the following equality is valid for $0 \le t \le T$:
\[
\int_t^T A(\rho,T)\sigma\sigma^*A(\rho,T)^*\,d\rho
= (T-t)A^{-1}\sigma\sigma^*(A^*)^{-1} - A(t,T)A^{-1}\sigma\sigma^*(A^*)^{-1} - \frac{1}{2}\bigl(A(t,T)\bigr)^2\sigma\sigma^*(A^*)^{-1}
\]
\[
= \Bigl( T - t - A(t,T) - \frac{1}{2}A\bigl(A(t,T)\bigr)^2 \Bigr) A^{-1}\sigma\sigma^*(A^*)^{-1}.
\tag{3.269}
\]
Observe that an equality of the form $A\sigma\sigma^* = \sigma\sigma^*A^*$ holds whenever $A = A^*$ and the matrix $\sigma\sigma^*$ is a "function" of $A$. In particular this is true when $d = 1$ and $A = a$ is a real number.


Proof. Since $A$ is invertible we have $A(\rho,T) = \bigl(I - e^{-(T-\rho)A}\bigr)A^{-1}$, and so
\[
\int_t^T A(\rho,T)\sigma\sigma^*A(\rho,T)^*\,d\rho
= \int_t^T \bigl(I - e^{-(T-\rho)A}\bigr)A^{-1}\sigma\sigma^*(A^*)^{-1}\bigl(I - e^{-(T-\rho)A^*}\bigr)\,d\rho
\]
\[
= \int_0^{T-t} \bigl(I - e^{-\rho A}\bigr)A^{-1}\sigma\sigma^*(A^*)^{-1}\bigl(I - e^{-\rho A^*}\bigr)\,d\rho
\]
\[
= (T-t)A^{-1}\sigma\sigma^*(A^*)^{-1} - A(t,T)A^{-1}\sigma\sigma^*(A^*)^{-1} - A^{-1}\sigma\sigma^*(A^*)^{-1}A(t,T)^*
+ \int_0^{T-t} e^{-\rho A}A^{-1}\sigma\sigma^*(A^*)^{-1}e^{-\rho A^*}\,d\rho.
\tag{3.270}
\]
The final equality in (3.270) proves (3.268). Next we also assume that $A\sigma\sigma^* = \sigma\sigma^*A^*$. Then $\sigma\sigma^*e^{-\rho A^*} = e^{-\rho A}\sigma\sigma^*$, and hence
\[
\int_0^{T-t} e^{-\rho A}A^{-1}\sigma\sigma^*(A^*)^{-1}e^{-\rho A^*}\,d\rho
= \int_0^{T-t} e^{-2\rho A}\,d\rho\; A^{-1}\sigma\sigma^*(A^*)^{-1}
= \frac{1}{2}\bigl(I - e^{-2(T-t)A}\bigr)A^{-2}\sigma\sigma^*(A^*)^{-1}.
\tag{3.271}
\]
A simple calculation shows
\[
\frac{1}{2}\bigl(I - e^{-2(T-t)A}\bigr) = A(t,T)A - \frac{1}{2}\bigl(A(t,T)\bigr)^2A^2,
\tag{3.272}
\]
and so the equalities in (3.271) show
\[
\int_0^{T-t} e^{-\rho A}A^{-1}\sigma\sigma^*(A^*)^{-1}e^{-\rho A^*}\,d\rho
= \Bigl( A(t,T) - \frac{1}{2}\bigl(A(t,T)\bigr)^2A \Bigr) A^{-1}\sigma\sigma^*(A^*)^{-1}.
\tag{3.273}
\]
A combination of (3.270) and (3.273), together with the equality $\sigma\sigma^*A(t,T)^* = A(t,T)\sigma\sigma^*$, then yields the equality in (3.269), completing the proof of Lemma 3.93. $\square$
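A scalar sanity check of (3.269) ($d = 1$, $A = a$; the numerical values below are illustrative choices of our own): compare a fine trapezoidal quadrature of $\int_t^T \sigma^2 A(\rho,T)^2\,d\rho$ with the closed form.

```python
import numpy as np

a, sigma, t, T = 1.3, 0.4, 0.5, 3.0

def A_fun(u, T):
    # A(u, T) = (1 - e^{-a (T - u)}) / a
    return (1.0 - np.exp(-a * (T - u))) / a

# trapezoidal quadrature of the left-hand side of (3.269)
rho = np.linspace(t, T, 200_001)
vals = sigma**2 * A_fun(rho, T) ** 2
dr = rho[1] - rho[0]
lhs = dr * (vals[0] / 2 + vals[1:-1].sum() + vals[-1] / 2)

# closed-form right-hand side of (3.269) for d = 1
A_tT = A_fun(t, T)
rhs = (T - t - A_tT - 0.5 * a * A_tT**2) * sigma**2 / a**2

print(lhs, rhs)
```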


Before we discuss the Vasicek model we insert Girsanov's theorem, formulated in the way we will use it in Theorem 3.101. In fact we will formulate it in a multivariate context.




3.94. Theorem. Let $\{X(t) : 0 \le t \le T\}$ be an Itô process satisfying
\[
dX(t) = v(t)\,dt + u(t)\,dW(t), \quad 0 \le t \le T.
\]
Suppose there exists a process $\{\theta(t) : 0 \le t \le T\}$ with the property that
\[
\mathbb{P}\Bigl[ \int_0^T |\theta(t)|^2\,dt < \infty \Bigr] = 1,
\]
such that the process $v(t) - u(t)\theta(t)$ has this property as well. Assume furthermore that the process $t \mapsto \mathcal{E}(t)$, $0 \le t \le T$, defined by
\[
\mathcal{E}(t) = \exp\Bigl( -\int_0^t \theta(s)\,dW(s) - \frac{1}{2}\int_0^t |\theta(s)|^2\,ds \Bigr)
\tag{3.274}
\]
is a $\mathbb{P}$-martingale, which is guaranteed provided $\mathbb{E}[\mathcal{E}(t)] = 1$ for $0 \le t \le T$. Define the measure $\mathbb{P}^*$ such that $\dfrac{d\mathbb{P}^*}{d\mathbb{P}} = \mathcal{E}(T)$. Then
\[
t \mapsto W^*(t) := W(t) + \int_0^t \theta(s)\,ds, \quad t \in [0,T],
\]
is a Brownian motion w.r.t. $\mathbb{P}^*$, and the process $\{X(t) : 0 \le t \le T\}$ has a representation w.r.t. $W^*(t)$ given by
\[
dX(t) = \bigl( v(t) - u(t)\theta(t) \bigr)\,dt + u(t)\,dW^*(t).
\]





We briefly show that $\{\mathcal{E}(t) : 0 \le t \le T\}$ is a diffusion process. Set $Y(t) = \int_0^t \theta(s)\,dW(s)$, $0 \le t \le T$, and consider the function $f(t,x) \in C^2\bigl([0,T] \times \mathbb{R}\bigr)$ defined by
\[
f(t,x) = \exp\Bigl( -x - \frac{1}{2}\int_0^t |\theta(s)|^2\,ds \Bigr).
\]
Then we clearly have $\mathcal{E}(t) = f(t, Y(t))$. By Itô's formula we have
\[
d\mathcal{E}(t) = -\frac{1}{2}|\theta(t)|^2\mathcal{E}(t)\,dt - \mathcal{E}(t)\theta(t)\,dW(t) + \frac{1}{2}\mathcal{E}(t)\,d\langle Y,Y\rangle(t)
\]
\[
= -\frac{1}{2}|\theta(t)|^2\mathcal{E}(t)\,dt - \mathcal{E}(t)\theta(t)\,dW(t) + \frac{1}{2}\mathcal{E}(t)|\theta(t)|^2\,dt
= -\theta(t)\mathcal{E}(t)\,dW(t).
\tag{3.275}
\]
Hence, it follows that
\[
\mathcal{E}(t) = \mathcal{E}(0) - \int_0^t \theta(s)\mathcal{E}(s)\,dW(s),
\]
which in general is a local martingale for which $\mathbb{E}[\mathcal{E}(T)] \le 1$. It is a supermartingale, but not necessarily a martingale. If, for $0 \le t \le T$, the expectation $\mathbb{E}[\mathcal{E}(t)] = 1$, then $t \mapsto \mathcal{E}(t)$, $0 \le t \le T$, is a martingale. If Novikov's condition, i.e.
\[
\mathbb{E}\Bigl[ \exp\Bigl( \frac{1}{2}\int_0^T |\theta(t)|^2\,dt \Bigr) \Bigr] < \infty,
\]
is satisfied, then the process $\{\mathcal{E}(t) : 0 \le t \le T\}$ is a martingale. For details on this condition, see Corollary 4.27 in Chapter 4. For more results on (local) exponential martingales see subsection 1.3 of Chapter 4 as well. In Section 3 of the same chapter the reader may find some more information on Girsanov's theorem. In particular, see assertion (4) of Proposition 4.24 and Theorem 4.25.
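A Monte Carlo sanity check (with a constant, hence Novikov-admissible, $\theta$ of our own choosing): for constant $\theta$ one has $\int_0^T \theta\,dW = \theta\,W(T)$ with $W(T) \sim N(0,T)$, and the sample mean of $\mathcal{E}(T)$ should be close to $1$:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, T, n_paths = 0.8, 1.0, 400_000

# constant theta: int_0^T theta dW = theta * W(T), with W(T) ~ N(0, T)
WT = np.sqrt(T) * rng.standard_normal(n_paths)
E_T = np.exp(-theta * WT - 0.5 * theta**2 * T)   # E(T) from (3.274)

print(E_T.mean())   # should be close to 1
```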


8.1. The Vasicek model. In this subsection we want to employ the results in Proposition 3.92 with $d = 1$ to find the bond prices in the Vasicek model. Until now we were always working in the physical probability space $(\Omega, \mathcal{F}, \mathbb{P})$. In order to calculate the fair price of a financial instrument one often uses the method of risk-neutral pricing. Through this technique the price of a financial asset is the expectation of its discounted pay-off under the so-called risk-neutral measure $\mathbb{Q}$. The risk-neutral measure is equivalent to the physical measure $\mathbb{P}$. Suppose for example that $\{S(t)\}_{t\ge 0}$ is the price of a certain asset at time $t$. The price of our asset at time $t$ discounted to time $0$ is then given by $\widetilde{S}(t) := e^{-\int_0^t r(u)\,du}S(t)$. As a main property of the risk-neutral measure, the family of discounted prices $\{\widetilde{S}(t)\}_{t\ge 0}$ is a $\mathbb{Q}$-martingale. This means that for every $s$, $t$, $0 \le s \le t$, we have
\[
\mathbb{E}\bigl[ \widetilde{S}(t) \bigm| \mathcal{F}_s \bigr]
= \mathbb{E}\bigl[ e^{-\int_0^t r(u)\,du}S(t) \bigm| \mathcal{F}_s \bigr]
= e^{-\int_0^s r(u)\,du}S(s) = \widetilde{S}(s),
\tag{3.276}
\]
where expectations $\mathbb{E}$ are with respect to $\mathbb{Q}$. Because of this property, a risk-neutral measure is also called an equivalent martingale measure. Roughly speaking, the existence of such a measure is equivalent to the no-arbitrage assumption. We will use this martingale property to price a zero-coupon bond. That is a financial debt instrument that pays the holder a fixed amount, named the face value, at maturity $T$. For simplicity we take $1$ as face value. The price of a zero-coupon bond is then given by the following theorem.



face value at maturity T. For simplicity we take 1 as face value. The price of a 
zero-coupon bond is then given by the following theorem. 


3.95. Theorem. Consider a zero-coupon bond which pays an amount of $1$ at maturity $T$. The price at time $t \le T$ is then
\[
P(t,T) = \mathbb{E}\bigl[ e^{-\int_t^T r(s)\,ds} \bigm| \mathcal{F}_t \bigr].
\tag{3.277}
\]

Proof. We use the above explained property that the discounted price $e^{-\int_0^t r(s)\,ds}P(t,T)$ is a martingale, and the trivial fact that $P(T,T) = 1$:
\[
e^{-\int_0^t r(s)\,ds}P(t,T)
= \mathbb{E}\bigl[ e^{-\int_0^T r(s)\,ds}P(T,T) \bigm| \mathcal{F}_t \bigr]
= \mathbb{E}\bigl[ e^{-\int_0^T r(s)\,ds} \bigm| \mathcal{F}_t \bigr]
= e^{-\int_0^t r(s)\,ds}\,\mathbb{E}\bigl[ e^{-\int_t^T r(s)\,ds} \bigm| \mathcal{F}_t \bigr].
\tag{3.278}
\]
The bond's price can thus be written as $P(t,T) = \mathbb{E}\bigl[ e^{-\int_t^T r(s)\,ds} \bigm| \mathcal{F}_t \bigr]$. This completes the proof of Theorem 3.95. $\square$


Formula (3.277) is an expression for the bond's price for an arbitrarily chosen interest rate process. We will now apply this to our Vasicek model $\{r(t)\}_{t\ge 0}$. We will investigate three methods that all lead to the same result, stated in the following theorem. We follow the approach of Mamon in [94]. For an alternative approach see [114] as well.

3.96. Theorem. Consider a zero-coupon bond which pays an amount of $1$ at maturity $T$. Suppose that under the risk-neutral measure the short rate follows an Ornstein-Uhlenbeck process: $dr(t) = a(b - r(t))\,dt + \sigma\,dW(t)$. The fair price of the bond at time $t \le T$ is then given by
\[
P(t,T) = e^{-A(t,T)r(t) + D(t,T)},
\tag{3.279}
\]
where
\[
A(t,T) = \frac{1 - e^{-a(T-t)}}{a}, \quad\text{and}\quad
D(t,T) = \frac{\bigl(A(t,T) - T + t\bigr)\bigl(a^2b - \sigma^2/2\bigr)}{a^2} - \frac{\sigma^2A(t,T)^2}{4a}.
\tag{3.280}
\]
Equation (3.279) is an affine term structure model. In fact, the bond yield $y_t(T)$ is defined as the constant interest rate at which the price of the bond grows to its face value, i.e., $P(t,T)e^{y_t(T)(T-t)} = 1$. We thus find that
\[
y_t(T) = \frac{-\log P(t,T)}{T-t} = \frac{A(t,T)r(t) - D(t,T)}{T-t},
\]
which is indeed affine in $r(t)$. The yield curve or term structure at time $t$ is the graph $(T, y_t(T))$.
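The closed-form ingredients of (3.279)-(3.280) are straightforward to code up. A sketch (the parameter values are illustrative choices of our own; `a`, `b`, `sigma` are the model inputs and `r0` the current short rate):

```python
import numpy as np

def A_vasicek(t, T, a):
    return (1.0 - np.exp(-a * (T - t))) / a

def D_vasicek(t, T, a, b, sigma):
    A = A_vasicek(t, T, a)
    return (A - T + t) * (a**2 * b - sigma**2 / 2.0) / a**2 \
        - sigma**2 * A**2 / (4.0 * a)

def bond_price(t, T, r_t, a, b, sigma):
    # P(t, T) = exp(-A(t, T) r(t) + D(t, T)), formula (3.279)
    return np.exp(-A_vasicek(t, T, a) * r_t + D_vasicek(t, T, a, b, sigma))

def bond_yield(t, T, r_t, a, b, sigma):
    return -np.log(bond_price(t, T, r_t, a, b, sigma)) / (T - t)

a, b, sigma, r0 = 0.86, 0.05, 0.02, 0.04
print(bond_price(0.0, 5.0, r0, a, b, sigma), bond_yield(0.0, 5.0, r0, a, b, sigma))
```

Note that $P(T,T) = 1$ since $A(T,T) = D(T,T) = 0$, and that the yield $y_t(T)$ tends to the short rate $r(t)$ as $T \downarrow t$.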




8.1.1. Bond price implied by the distribution of the short rate. The first method to calculate the bond price is quite straightforward. It calculates the conditional expectation in formula (3.277) by determining the distribution of $\int_t^T r(s)\,ds$ given $\mathcal{F}_t$.

First proof of Theorem 3.96. Because formula (3.277) shows that the bond's price at time $t$ is conditional on $\mathcal{F}_t$, we may assume that $r(t)$ is a parameter. Using formula (3.266) (for $d = 1$, and $A = a$) with starting time $t$ we find, for $s \ge t$,
\[
r(s) = r(t)e^{-a(s-t)} + b\bigl(1 - e^{-a(s-t)}\bigr) + \int_t^s e^{-a(s-\rho)}\sigma\,dW(\rho).
\]
We want to determine the distribution of $\int_t^T r(s)\,ds$ conditioned on $\mathcal{F}_t$. Note that because of the Markov property of the Ornstein-Uhlenbeck process (or more generally for diffusion processes: see the equality in (3.249)), this distribution will only depend on $r(t)$. Let us start by determining the distribution of $\int_t^T r(s)\,ds$ given $\mathcal{F}_t$. This distribution is normal, and essentially speaking this follows from Proposition 3.92 and Lemma 3.93. First of all, from assertion (3) in Proposition 3.92 we get by (3.263)
\[
\mathbb{E}\Bigl[ \int_t^T r(s)\,ds \Bigm| \mathcal{F}_t \Bigr]
= \mathbb{E}\Bigl[ \int_t^T r(s)\,ds \Bigm| r(t) \Bigr]
= A(t,T)\bigl(r(t) - b\bigr) + (T-t)b.
\tag{3.281}
\]
Secondly, from (3.264) and (3.269) in Lemma 3.93 we get
\[
\operatorname{var}\Bigl( \int_t^T r(s)\,ds \Bigm| r(t) \Bigr)
= \int_t^T \sigma^2\bigl(A(\rho,T)\bigr)^2\,d\rho
= \frac{\sigma^2}{a^2}\Bigl( T - t - A(t,T) - \frac{a}{2}\bigl(A(t,T)\bigr)^2 \Bigr).
\tag{3.282}
\]
The equality in (3.279) of Theorem 3.96 then follows from (3.281), (3.282) and (3.265) in assertion (4) of Proposition 3.92. $\square$
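The first proof can be cross-checked by Monte Carlo (a sketch with illustrative parameters of our own): simulate the short rate with its exact Gaussian one-step transition, integrate it along each path, and compare the sample average of $e^{-\int_t^T r(s)\,ds}$ with the closed-form price (3.279):

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, sigma, r0, T = 0.86, 0.05, 0.02, 0.04, 2.0
n_paths, n_steps = 100_000, 200
dt = T / n_steps

# closed-form price (3.279)-(3.280) with t = 0
A_T = (1.0 - np.exp(-a * T)) / a
D_T = (A_T - T) * (a**2 * b - sigma**2 / 2.0) / a**2 - sigma**2 * A_T**2 / (4.0 * a)
P_exact = np.exp(-A_T * r0 + D_T)

# exact one-step transition of the Ornstein-Uhlenbeck short rate
phi = np.exp(-a * dt)
sd = np.sqrt(sigma**2 * (1.0 - np.exp(-2.0 * a * dt)) / (2.0 * a))
r = np.full(n_paths, r0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    r_next = phi * r + b * (1.0 - phi) + sd * rng.standard_normal(n_paths)
    integral += 0.5 * (r + r_next) * dt   # trapezoidal rule on each step
    r = r_next

P_mc = np.exp(-integral).mean()
print(P_mc, P_exact)
```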

8.1.2. Bond price by solving the PDE. A second method that is proposed to calculate the bond's price in the Vasicek model is by solving partial differential equations. More precisely, we will derive a PDE for the bond's price by using martingales.

Taking into account the Markov property of the process $\{r(t)\}_{t\ge 0}$ (see the equality in (3.249)) one can introduce the following variable:
\[
P(t,T) = \mathbb{E}\bigl[ e^{-\int_t^T r(s)\,ds} \bigm| \mathcal{F}_t \bigr]
= \mathbb{E}\bigl[ e^{-\int_t^T r(s)(z)\,ds} \bigr]\bigr|_{z=r(t)} =: P(t,T,r(t)).
\tag{3.283}
\]
Here $r(s)$, $s \ge t$, is the function of $r(t)$ given by
\[
r(s) = r(t)e^{-a(s-t)} + b\bigl(1 - e^{-a(s-t)}\bigr) + \int_t^s e^{-a(s-\rho)}\sigma\,dW(\rho).
\tag{3.284}
\]




We now provide a second proof of Theorem 3.96.

Second proof of Theorem 3.96. We will apply Itô's formula to the function $f(t,x) = e^{-\int_0^t r(s)\,ds}P(t,T,x)$. Then we obtain
\[
e^{-\int_0^t r(s)\,ds}P(t,T,r(t)) - P(0,T,r(0))
= \int_0^t \Bigl[ -r(u)e^{-\int_0^u r(s)\,ds}P(u,T,r(u)) + e^{-\int_0^u r(s)\,ds}\frac{\partial P(u,T,r(u))}{\partial t} \Bigr]\,du
\]
\[
+ \int_0^t e^{-\int_0^u r(s)\,ds}\frac{\partial P(u,T,r(u))}{\partial x}\bigl( a(b - r(u))\,du + \sigma\,dW(u) \bigr)
+ \frac{\sigma^2}{2}\int_0^t e^{-\int_0^u r(s)\,ds}\frac{\partial^2 P(u,T,r(u))}{\partial x^2}\,du.
\tag{3.285}
\]
Put
\[
f(t) = -r(t)e^{-\int_0^t r(s)\,ds}P(t,T,r(t))
+ e^{-\int_0^t r(s)\,ds}\Bigl[ \frac{\partial P(t,T,r(t))}{\partial t}
+ a\bigl(b - r(t)\bigr)\frac{\partial P(t,T,r(t))}{\partial x}
+ \frac{\sigma^2}{2}\frac{\partial^2 P(t,T,r(t))}{\partial x^2} \Bigr].
\tag{3.286}
\]





From the equality in (3.285) it follows that the process $t \mapsto \int_0^t f(u)\,du$ is a martingale. By Lemma 3.97 below it follows that $f(t) = 0$ $\mathbb{P}$-almost surely. From (3.286) it then follows that the function $P(t,T,x)$ satisfies the following differential equation:
\[
-xP(t,T,x) + \frac{\partial P(t,T,x)}{\partial t} + a(b-x)\frac{\partial P(t,T,x)}{\partial x}
+ \frac{\sigma^2}{2}\frac{\partial^2 P(t,T,x)}{\partial x^2} = 0.
\tag{3.287}
\]
From (3.283) and (3.284) it follows that
\[
\frac{\partial P(t,T,x)}{\partial x} = -\frac{1}{a}\bigl(1 - e^{-a(T-t)}\bigr)P(t,T,x) = -A(t,T)P(t,T,x).
\tag{3.288}
\]
From (3.288) we easily infer that
\[
P(t,T,x) = C(t,T)e^{-A(t,T)x}.
\tag{3.289}
\]
Inserting this expression for $P(t,T,x)$ into (3.287) yields the first order equation
\[
-x + \frac{1}{C(t,T)}\frac{\partial C(t,T)}{\partial t} - \frac{\partial A(t,T)}{\partial t}x
- A(t,T)\bigl\{ a(b-x) \bigr\} + \frac{\sigma^2}{2}A(t,T)^2 = 0.
\tag{3.290}
\]
Because $-1 - \dfrac{\partial A(t,T)}{\partial t} + aA(t,T) = 0$, the equality in (3.290) implies:
\[
\frac{1}{C(t,T)}\frac{\partial C(t,T)}{\partial t} = abA(t,T) - \frac{\sigma^2}{2}A(t,T)^2.
\tag{3.291}
\]
Since $C(T,T) = P(T,T,0) = 1$, from (3.291) we infer $C(t,T) = e^{D(t,T)}$, and hence
\[
P(t,T) = P(t,T,r(t)) = e^{-A(t,T)r(t) + D(t,T)},
\]
which completes the proof of Theorem 3.96 by employing the PDE as formulated in (3.287). $\square$

The equation in (3.287) is called the PDE for the bond price in the Vasicek model.
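As a sanity check (parameter values are illustrative choices of our own), the closed-form price can be plugged into (3.287) with central finite differences; the residual should vanish up to discretization error:

```python
import numpy as np

a, b, sigma, T = 0.86, 0.05, 0.02, 5.0

def P(t, x):
    # closed-form bond price (3.279)-(3.280) as a function of (t, x)
    A = (1.0 - np.exp(-a * (T - t))) / a
    D = (A - T + t) * (a**2 * b - sigma**2 / 2.0) / a**2 \
        - sigma**2 * A**2 / (4.0 * a)
    return np.exp(-A * x + D)

t0, x0, h = 1.2, 0.03, 1e-4
P_t = (P(t0 + h, x0) - P(t0 - h, x0)) / (2 * h)
P_x = (P(t0, x0 + h) - P(t0, x0 - h)) / (2 * h)
P_xx = (P(t0, x0 + h) - 2 * P(t0, x0) + P(t0, x0 - h)) / h**2

# residual of the PDE (3.287): -x P + P_t + a(b - x) P_x + (sigma^2/2) P_xx
residual = -x0 * P(t0, x0) + P_t + a * (b - x0) * P_x + 0.5 * sigma**2 * P_xx
print(residual)
```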

3.97. Lemma. Let $(\Omega, \mathcal{F}, (\mathcal{F}_t)_{t\ge 0}, \mathbb{P})$ be a filtered probability space, and let the right-continuous adapted process $\{f(t)\}_{t\ge 0}$ be such that, for some sequence of stopping times $(\tau_n)_{n\in\mathbb{N}}$ which increases to $\infty$, the integrals $\int_0^{\tau_n} |f(s)|\,ds$ are finite $\mathbb{P}$-almost surely. If the process $t \mapsto \int_0^t f(s)\,ds$ is a local martingale, then $f(t) = 0$ $\mathbb{P}$-almost surely for almost all $t$.

Proof. Fix $0 < T < \infty$. By localizing at stopping times $(\tau_n)_{n\in\mathbb{N}}$, $\tau_n \uparrow \infty$ ($n \to \infty$), we may assume that
\[
\mathbb{E}\Bigl[ \int_0^T |f(s)|\,ds \Bigr] < \infty.
\tag{3.292}
\]
Otherwise we replace $f(t)$ with $f(t)\mathbf{1}_{[0,\tau_n]}(t)$, and prove that $f(t)\mathbf{1}_{[0,\tau_n]}(t) = 0$ for all $n \in \mathbb{N}$. But then $f(t) = 0$, by letting $n \to \infty$. So we assume that (3.292) is satisfied. Then for $0 \le s \le t \le T$ we have
\[
\int_0^s f(\rho)\,d\rho + \mathbb{E}\Bigl[ \int_s^t f(\rho)\,d\rho \Bigm| \mathcal{F}_s \Bigr]
= \mathbb{E}\Bigl[ \int_0^t f(\rho)\,d\rho \Bigm| \mathcal{F}_s \Bigr]
= \int_0^s f(\rho)\,d\rho.
\tag{3.293}
\]
From (3.293) we infer that $\mathbb{E}\bigl[ \int_s^t f(\rho)\,d\rho \mid \mathcal{F}_s \bigr] = 0$, $\mathbb{P}$-almost surely, for all $0 \le s \le t \le T$. Differentiating with respect to $t$ then results in $\mathbb{E}[f(t) \mid \mathcal{F}_s] = 0$ $\mathbb{P}$-almost surely for all $0 \le s \le t \le T$. But then, by the right-continuity of the process $\{f(t)\}_{t\ge 0}$, it follows that
\[
f(s) = \lim_{t\downarrow s} \mathbb{E}[f(t) \mid \mathcal{F}_s] = 0, \quad \mathbb{P}\text{-almost surely}.
\]
This completes the proof of Lemma 3.97. $\square$


8.1.3. Bond prices using forward rates. The third and last method to calcu¬ 
late a bond’s price in the Vasicek model, is based upon the concept of forward 
rates. Indeed, in the Heath-Jarrow-Morton pricing paradigm the closed-form 
of the bond’s price follows directly from the short rate dynamics under the so- 
called forward measure. Suppose we are at time t. We want to know the rate 
of interest in the period of time between T\ en 77 with t < T\ < 77. This is 
called the forward rate for the period between T\ and T 2 and we denote it by 
f (t, Ti, T 2 ). When the rates between time t and 7\ and between time t and T 2 
are known - write R\ and 7? 2 - we must have: 

e Ri (Ti-t) e /(i,Ti,T 2 )(T a -Ti) = e R 2 (T 2 -t)' 


Hence, we find for the forward rate 

n ( , rp rp X _ R 2 {T 2 ~ t) ~ Rl (Tl - t) 

J -2 -^1 

Applying this in our framework of bond prices, R\ and 7? 2 equal the bond yields: 


Ri = 


-log P{t,Tf) 


Ro = 


— log P (t, Tf) 


T { -1 ’ T 2 -t 

such that the forward rate is given by 

— log P (t, Tf) — log P (t, T\) 


f (t, Ti, T 2 ) = 


T 2 ~ Ti 

When $T_1$ and $T_2$ come infinitesimally close to each other, we obtain a so-called
instantaneous forward rate. The instantaneous forward rate at time $T \ge t$ is
\[
f(t,T) = -\lim_{t' \to T} \frac{\log P(t,t') - \log P(t,T)}{t' - T}
= -\frac{\partial \log P(t,t')}{\partial t'}\Big|_{t'=T}.
\]
Solving this partial differential equation for $P(t,T)$ on $[t,T]$ we find immediately that
\[
P(t,T) = e^{-\int_t^T f(t,s)\,ds}. \tag{3.294}
\]
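These relations are easy to check numerically. The sketch below computes a forward rate from two bond prices and verifies the compounding identity that defines it; the discount curve used here is a hypothetical illustration, not taken from the text, and continuous compounding is assumed throughout.

```python
import math

def bond_price(t, T):
    # Hypothetical discount curve: continuously compounded yield
    # R(t, T) = 0.03 + 0.005 * (T - t).
    R = 0.03 + 0.005 * (T - t)
    return math.exp(-R * (T - t))

def forward_rate(t, T1, T2):
    # f(t, T1, T2) = -(log P(t, T2) - log P(t, T1)) / (T2 - T1)
    return -(math.log(bond_price(t, T2)) - math.log(bond_price(t, T1))) / (T2 - T1)

t, T1, T2 = 0.0, 1.0, 2.0
R1 = -math.log(bond_price(t, T1)) / (T1 - t)
R2 = -math.log(bond_price(t, T2)) / (T2 - t)
f = forward_rate(t, T1, T2)

# No-arbitrage check: rolling an investment over at the forward rate
# must match investing directly until T2.
lhs = math.exp(R1 * (T1 - t)) * math.exp(f * (T2 - T1))
rhs = math.exp(R2 * (T2 - t))
print(f, lhs - rhs)
```

With this curve the two yields are $R_1 = 0.035$ and $R_2 = 0.04$, and the two ways of computing $f(t,T_1,T_2)$ agree.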

Later on we will see that the link between the instantaneous forward rate and
the short rate is the so-called forward measure. In the sequel, we will need two
properties of conditional expectations under a change of measure. These results
can be found in [32]. In the following theorems $P$ is a probability measure on a




$\sigma$-algebra $\mathcal{F}$, the probability measure $Q \ll P$ is such that $\frac{dQ}{dP} = Z$. Furthermore
$\mathcal{G}$ is a sub-$\sigma$-algebra of $\mathcal{F}$. The symbol $E$ denotes expectation w.r.t. $P$, while
$E^Q$ stands for expectation w.r.t. $Q$.


3.98. Theorem. In the notation of above, it holds that
\[
\frac{dQ}{dP}\Big|_{\mathcal{G}} = E\left[Z \mid \mathcal{G}\right].
\]

Proof. Take an arbitrary $B \in \mathcal{G}$. We need to show that
\[
Q(B) = E\left[E\left[Z \mid \mathcal{G}\right] 1_B\right].
\]
Indeed:
\[
E\left[E\left[Z \mid \mathcal{G}\right] 1_B\right]
= E\left[E\left[Z 1_B \mid \mathcal{G}\right]\right]
= E\left[Z 1_B\right] = Q(B).
\]
This completes the proof of Theorem 3.98. □

3.99. Theorem. For any $\mathcal{F}$-measurable random variable $X$:
\[
E\left[Z \mid \mathcal{G}\right] E^Q\left[X \mid \mathcal{G}\right] = E\left[Z X \mid \mathcal{G}\right].
\]

Proof. Let $Y = E\left[Z \mid \mathcal{G}\right]$. Take $B \in \mathcal{G}$ arbitrary; then:
\begin{align*}
E^Q\left[1_B E\left[Z X \mid \mathcal{G}\right]\right]
&= E\left[Y 1_B E\left[Z X \mid \mathcal{G}\right]\right]
= E\left[E\left[Y 1_B Z X \mid \mathcal{G}\right]\right] \\
&= E\left[Y 1_B Z X\right]
= E^Q\left[Y 1_B X\right]
= E^Q\left[E^Q\left[1_B Y X \mid \mathcal{G}\right]\right] \\
&= E^Q\left[1_B E^Q\left[Y X \mid \mathcal{G}\right]\right].
\end{align*}
In the first step we used that $1_B E\left[Z X \mid \mathcal{G}\right]$ is $\mathcal{G}$-measurable, so we could
apply Theorem 3.98, which tells us that $dQ\big|_{\mathcal{G}} = Y\,dP\big|_{\mathcal{G}}$. Because the previous
reasoning holds for all $B \in \mathcal{G}$ we must have
\[
E\left[Z X \mid \mathcal{G}\right] = E^Q\left[Y X \mid \mathcal{G}\right] = Y\, E^Q\left[X \mid \mathcal{G}\right],
\]
which proves the claim in Theorem 3.99. □
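Both theorems can be verified by direct computation on a finite probability space. The sketch below is only an illustration: the space, density $Z$ and partition generating $\mathcal{G}$ are arbitrary choices, and the identity of Theorem 3.99 is checked atom by atom.

```python
# Finite probability space Omega = {0,1,2,3}; P uniform.
# G is generated by the partition {{0,1},{2,3}}; Z = dQ/dP.
P = [0.25, 0.25, 0.25, 0.25]
Z = [0.5, 1.5, 2.0, 0.0]            # density with E[Z] = 1, so Q << P
X = [1.0, 2.0, 3.0, 4.0]
Q = [p * z for p, z in zip(P, Z)]   # Q({w}) = Z(w) P({w})
blocks = [[0, 1], [2, 3]]

def cond_exp(meas, f, block):
    # Conditional expectation of f given the atom `block` of G.
    mass = sum(meas[w] for w in block)
    return sum(f[w] * meas[w] for w in block) / mass

ZX = [z * x for z, x in zip(Z, X)]
for B in blocks:
    lhs = cond_exp(P, Z, B) * cond_exp(Q, X, B)   # E[Z|G] * E^Q[X|G]
    rhs = cond_exp(P, ZX, B)                      # E[ZX|G]
    print(B, lhs, rhs)
```

On each atom the product $E[Z \mid \mathcal{G}]\,E^Q[X \mid \mathcal{G}]$ agrees with $E[ZX \mid \mathcal{G}]$, which is exactly the statement of Theorem 3.99.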


Besides the forward rate, we introduce a second economic concept, that of a
numeraire. A numeraire is a tradeable economic security in terms of which
the relative prices of other assets can be expressed. This allows us not only to
compare different financial instruments at a given moment; it also makes it
possible to compare the prices of assets at different times. A typical example
of a numeraire is money. The random variable $M(t) = e^{\int_0^t r(s)\,ds}$ represents the
value at time $t$ of an asset which was invested in the money market at time
$0$ with value $1$. Recall that in accordance with the definition of a risk-neutral
measure $Q$, the price of an asset relative to the money market is a martingale.
In our new notation the expressions in (3.276) become:
\[
E\left[\frac{S(t)}{M(t)} \,\Big|\, \mathcal{F}_s\right] = \frac{S(s)}{M(s)},
\qquad\text{equivalently}\qquad
E\left[e^{-\int_s^t r(u)\,du}\, S(t) \,\Big|\, \mathcal{F}_s\right] = S(s),
\]
with $0 \le s \le t$. We say that $Q$ is an equivalent martingale measure for the
numeraire $\{M(t)\}_{t \ge 0}$. Let $N(t)$ be the price at time $t$ of another traded asset.


Brownian motion, Gaussian processes and martingales 


Suppose that $Q^*$ is an equivalent martingale measure for $\{N(t)\}_{t \ge 0}$, i.e. for all
$0 \le s \le t$:
\[
E^*\left[\frac{S(t)}{N(t)} \,\Big|\, \mathcal{F}_s\right] = \frac{S(s)}{N(s)}.
\]
We can also define this measure on the basis of the Radon-Nikodym derivative
of $Q^*$ w.r.t. $Q$.


3.100. Theorem. Suppose that $Q$ is an equivalent martingale measure for the
numeraire $\{M(t)\}_{t \ge 0}$. Let $Q^*$ be an absolutely continuous measure w.r.t. $Q$
defined by the Radon-Nikodym derivative:
\[
\frac{dQ^*}{dQ} = \frac{M(0)\,N(t)}{M(t)\,N(0)}, \tag{3.295}
\]
where $N(t) > 0$ is the price at time $t$ of a particular asset. Then $Q^*$ is an
equivalent martingale measure for $\{N(t) : t \ge 0\}$.


Proof. Denote expectations w.r.t. $Q$ by $E$ and w.r.t. $Q^*$ by $E^*$. Let
$S(t)$ be the price of an asset at time $t \ge 0$ and assume $S(t) \in L^2(\Omega, \mathcal{F}_t, Q) \cap L^2(\Omega, \mathcal{F}_t, Q^*)$. For $t \ge s \ge 0$ we find, using Theorem 3.99,
\begin{align*}
E^*\left[\frac{S(t)}{N(t)} \,\Big|\, \mathcal{F}_s\right]
&= E\left[\frac{M(0)N(t)}{M(t)N(0)}\,\frac{S(t)}{N(t)} \,\Big|\, \mathcal{F}_s\right]
\Big/\, E\left[\frac{M(0)N(t)}{M(t)N(0)} \,\Big|\, \mathcal{F}_s\right] \\
&= \frac{M(0)}{N(0)}\, E\left[\frac{S(t)}{M(t)} \,\Big|\, \mathcal{F}_s\right]
\Big/ \left(\frac{M(0)}{N(0)}\,\frac{N(s)}{M(s)}\right)
= \frac{S(s)}{M(s)}\,\frac{M(s)}{N(s)} = \frac{S(s)}{N(s)}.
\end{align*}
The proof of Theorem 3.100 is complete now.


□ 
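A one-period discrete model makes Theorem 3.100 concrete. All numbers below are arbitrary illustrations: given that $S/M$ and $N/M$ are $Q$-martingales, the measure $Q^*$ built from the density (3.295) makes $S/N$ a martingale again.

```python
# One period, two states under the risk-neutral measure Q.
q = [0.4, 0.6]                # Q-probabilities of the two states
M0, M1 = 1.0, [1.05, 1.05]    # money-market numeraire
N1 = [0.9, 1.2]               # terminal values of the new numeraire N
S1 = [1.3, 0.8]               # terminal values of some traded asset S

# The Q-martingale property of N/M and S/M fixes the time-0 prices:
N0 = M0 * sum(qi * n / m for qi, n, m in zip(q, N1, M1))
S0 = M0 * sum(qi * s / m for qi, s, m in zip(q, S1, M1))

# Change-of-numeraire density (3.295): dQ*/dQ = M(0) N(1) / (M(1) N(0)).
qstar = [qi * (M0 * n) / (m * N0) for qi, n, m in zip(q, N1, M1)]

# Under Q*, the ratio S/N must again be a martingale:
expect_star = sum(qi * s / n for qi, s, n in zip(qstar, S1, N1))
print(sum(qstar), expect_star, S0 / N0)
```

The reweighted probabilities sum to one (so $Q^*$ is a probability measure) and $E^*[S(1)/N(1)]$ equals $S(0)/N(0)$ exactly, mirroring the computation in the proof above.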





Note that the measures $Q$ and $Q^*$ are equivalent because of the strict positivity
of the Radon-Nikodym derivative. We already mention the following
theorem, which transforms the dynamics of a process under $Q$ to a process under
$Q^*$.


3.101. Theorem. Let $Q$ be an equivalent martingale measure for $\{M(t)\}_{t \ge 0}$ and
let $Q^*$ be defined by equation (3.295). Assume $\{X(t) : 0 \le t \le T\}$ is a diffusion
process with dynamics under $Q$
\[
dX(t) = b(t, X(t))\,dt + \sigma(t, X(t))\,dW(t).
\]
Let also $M(t)$ and $N(t)$ have dynamics under $Q$ given by
\[
dM(t) = m_M\,dt + \sigma_M\,dW(t), \qquad dN(t) = m_N\,dt + \sigma_N\,dW(t).
\]
Then the dynamics of $\{X(t) : 0 \le t \le T\}$ under $Q^*$ is given by
\[
dX(t) = b(t,\omega)\,dt - \sigma(t,\omega)\left(\frac{\sigma_M}{M(t)} - \frac{\sigma_N}{N(t)}\right)dt + \sigma(t,\omega)\,dW^*(t),
\]
where
\[
W^*(t) = W(t) + \int_0^t \left(\frac{\sigma_M}{M(s)} - \frac{\sigma_N}{N(s)}\right)ds.
\]


Proof. It is clear that we want to apply Girsanov's Theorem 3.94. But then
we need to know what $\theta(t,\omega)$ in expression (3.274) looks like. From expression
(3.275) we know that
\[
dZ_t = -\theta(t,\cdot)\,Z_t\,dW(t). \tag{3.296}
\]
On the other hand:
\begin{align*}
dZ_t = \frac{M(0)}{N(0)}\, d\left(\frac{N(t)}{M(t)}\right)
&= \frac{M(0)}{N(0)}\,\frac{dN(t)\,M(t) - N(t)\,dM(t)}{M(t)^2} \\
&= \frac{M(0)}{N(0)M(t)^2}\big((m_N\,dt + \sigma_N\,dW(t))\,M(t) - N(t)(m_M\,dt + \sigma_M\,dW(t))\big).
\tag{3.297}
\end{align*}
Because $\{Z_t : 0 \le t \le T\}$ is a martingale, we must have that the coefficient of $dt$
is $0$, hence
\[
dZ_t = \frac{M(0)}{N(0)M(t)^2}\big(\sigma_N\,dW(t)\,M(t) - N(t)\,\sigma_M\,dW(t)\big)
= \left(\frac{\sigma_N}{N(t)} - \frac{\sigma_M}{M(t)}\right) Z_t\,dW(t).
\]
Comparing this with (3.296) we have that
\[
\theta(t,\omega) = \frac{\sigma_M}{M(t)} - \frac{\sigma_N}{N(t)}.
\]
Finally, applying Girsanov's Theorem 3.94 to this we have
\[
dX(t) = b(t,\omega)\,dt - \sigma(t,\omega)\left(\frac{\sigma_M}{M(t)} - \frac{\sigma_N}{N(t)}\right)dt + \sigma(t,\omega)\,dW^*(t),
\]
with
\[
W^*(t) = W(t) + \int_0^t \left(\frac{\sigma_M}{M(s)} - \frac{\sigma_N}{N(s)}\right)ds.
\]
Altogether this completes the proof of Theorem 3.101.


□ 


In order to make the link between the short rate $r(t)$ and the instantaneous
forward rate $f(t,T)$, we introduce a new measure $Q^T$. Suppose again that $Q$ is
the risk-neutral measure w.r.t. the money market and $E$ the expectation w.r.t.
$Q$.


3.102. Definition. Take $T \ge 0$. The forward measure $Q^T$ is defined on $\mathcal{F}_T$ by
setting
\[
Z_T := \frac{dQ^T}{dQ} = \frac{M(0)\,P(T,T)}{M(T)\,P(0,T)} = \frac{e^{-\int_0^T r(s)\,ds}}{P(0,T)},
\]
where
\[
M(t) = e^{\int_0^t r(s)\,ds}
\qquad\text{and}\qquad
P(t,T) = E\left[e^{-\int_t^T r(s)\,ds} \,\Big|\, \mathcal{F}_t\right].
\]
By the previous theorem we conclude that $Q^T$ is an equivalent martingale measure which has a bond with maturity $T$ as numeraire.


For $t \le T$ we can easily calculate $Z_t$ as follows:
\[
Z_t := E\left[Z_T \mid \mathcal{F}_t\right]
= \frac{M(0)}{P(0,T)}\, E\left[\frac{P(T,T)}{M(T)} \,\Big|\, \mathcal{F}_t\right]
= \frac{M(0)}{P(0,T)}\,\frac{P(t,T)}{M(t)}
= e^{-\int_0^t r(s)\,ds}\,\frac{P(t,T)}{P(0,T)},
\]
where we used in the second to last equality that $Q$ has $\{M(t)\}_{t \ge 0}$ as numeraire.

Now we have all theoretical background information to formulate the third proof 
of Theorem 3.96. 


Third proof of Theorem 3.96. Denote as before the expectation w.r.t.
$Q$ by $E$ and the expectation w.r.t. $Q^T$ by $E^T$. We have by Theorem 3.99 that,
for any $\mathcal{F}_T$-measurable random variable $X$ and $t \le T$,
\[
E^T\left[X \mid \mathcal{F}_t\right]
= Z_t^{-1}\, E\left[X Z_T \mid \mathcal{F}_t\right]
= E\left[\frac{M(t)\,P(T,T)}{M(T)\,P(t,T)}\, X \,\Big|\, \mathcal{F}_t\right]
= E\left[X\, \frac{e^{-\int_t^T r(s)\,ds}}{P(t,T)} \,\Big|\, \mathcal{F}_t\right].
\]

We want to express the forward rate in terms of the short rate. We have a formula
for the bond price in terms of both of them. Differentiating the expression
$P(t,T) = E\big[e^{-\int_t^T r(s)\,ds} \,\big|\, \mathcal{F}_t\big]$ with respect to $T$ gives
\[
\frac{\partial P(t,T)}{\partial T}
= E\left[-r(T)\, e^{-\int_t^T r(s)\,ds} \,\Big|\, \mathcal{F}_t\right]
= E^T\left[-r(T)\,P(t,T) \mid \mathcal{F}_t\right]
= -E^T\left[r(T) \mid \mathcal{F}_t\right] P(t,T). \tag{3.298}
\]




In the second step we used the above reasoning with $X = -r(T)P(t,T)$. Differentiating now formula (3.294) with respect to $T$ gives
\[
\frac{\partial P(t,T)}{\partial T} = -f(t,T)\,P(t,T). \tag{3.299}
\]
Comparing (3.298) and (3.299), we get the link between short rate and forward
rate:
\[
f(t,T) = E^T\left[r(T) \mid \mathcal{F}_t\right]. \tag{3.300}
\]

Considering the right-hand side of (3.300), we will need to describe the dynamics
of $r(t)$ under $Q^T$. Since $M(t) = e^{\int_0^t r(s)\,ds}$, we immediately
find that $dM(t) = r(t)M(t)\,dt$. In the notation of Theorem 3.101 we thus have
$\sigma_M = 0$. If we then apply this theorem with $X(t) = r(t)$, $Q^* = Q^T$, $\sigma(t,X(t)) = \sigma$, $b(t,X(t)) = a(b - r(t))$ and $\sigma_N = -\sigma A(t,T)\,P(t,T)$, we obtain:
\begin{align*}
dr(t) &= \big(ab - \sigma^2 A(t,T) - a\,r(t)\big)\,dt + \sigma\,dW^T(t) \\
&= a\left(b - \frac{\sigma^2}{a^2}\big(1 - e^{-a(T-t)}\big) - r(t)\right)dt + \sigma\,dW^T(t),
\tag{3.301}
\end{align*}
where $W^T(t)$ is the $Q^T$-Brownian motion defined by
\[
W^T(t) = W(t) + \sigma \int_0^t A(s,T)\,ds.
\]
Expression (3.301) resembles an ordinary Vasicek process, except that the term
$b - \frac{\sigma^2}{a^2}\big(1 - e^{-a(T-t)}\big)$ depends upon $t$ and is thus not a constant. However,
we will use a similar reasoning as in the classical situation to solve the SDE for
$r(t)$ on the interval $[t,T]$. First, we apply It\^o's formula to $g(t,x) = e^{at}x$:
\begin{align*}
d\big(e^{at} r(t)\big)
&= ab\,e^{at}\,dt - \frac{\sigma^2}{a}\, e^{at}\big(1 - e^{-a(T-t)}\big)\,dt + \sigma e^{at}\,dW^T(t) \\
&= ab\,e^{at}\,dt - \frac{\sigma^2}{a}\big(e^{at} - e^{-a(T-2t)}\big)\,dt + \sigma e^{at}\,dW^T(t).
\end{align*}
Integrating from $t$ to $T$ gives
\begin{align*}
e^{aT} r(T) - e^{at} r(t)
&= ab \int_t^T e^{as}\,ds - \frac{\sigma^2}{a} \int_t^T \big(e^{as} - e^{-a(T-2s)}\big)\,ds + \sigma \int_t^T e^{as}\,dW^T(s) \\
&= b\big(e^{aT} - e^{at}\big) - \frac{\sigma^2}{2a^2}\big(e^{aT} - 2e^{at} + e^{-a(T-2t)}\big) + \sigma \int_t^T e^{as}\,dW^T(s).
\end{align*}
Thus we have that
\[
r(T) = r(t)e^{-a(T-t)} + b\big(1 - e^{-a(T-t)}\big) - \frac{\sigma^2}{2a^2}\big(1 - 2e^{-a(T-t)} + e^{-2a(T-t)}\big)
+ \sigma \int_t^T e^{-a(T-s)}\,dW^T(s).
\]




And hence,
\begin{align*}
f(t,s) = E^s\left[r(s) \mid \mathcal{F}_t\right]
&= r(t)e^{-a(s-t)} + b\big(1 - e^{-a(s-t)}\big) - \frac{\sigma^2}{2a^2}\big(1 - 2e^{-a(s-t)} + e^{-2a(s-t)}\big) \\
&= r(t)e^{-a(s-t)} + \left(b - \frac{\sigma^2}{2a^2}\right)\big(1 - e^{-a(s-t)}\big)
+ \frac{\sigma^2}{2a^2}\big(e^{-a(s-t)} - e^{-2a(s-t)}\big).
\end{align*}
Integrating results in
\begin{align*}
\int_t^T E^s\left[r(s) \mid \mathcal{F}_t\right] ds
&= \frac{r(t)}{a}\big(1 - e^{-a(T-t)}\big)
+ \left(b - \frac{\sigma^2}{2a^2}\right)\left(T - t - \frac{1 - e^{-a(T-t)}}{a}\right) \\
&\qquad + \frac{\sigma^2}{2a^2}\left(\frac{1 - e^{-a(T-t)}}{a} - \frac{1 - e^{-2a(T-t)}}{2a}\right) \\
&= r(t)\,A(t,T) + \left(b - \frac{\sigma^2}{2a^2}\right)\big(T - t - A(t,T)\big) + \frac{\sigma^2}{4a}\,A(t,T)^2 \\
&= r(t)\,A(t,T) - D(t,T). \tag{3.302}
\end{align*}
Recalling formula (3.294) and formula (3.300), we find again that
\[
P(t,T) = e^{-A(t,T)\,r(t) + D(t,T)}.
\]
This completes the third proof of Theorem 3.96. □
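The bond price just derived can be cross-checked against the Gaussian law of the integrated short rate: under the Vasicek dynamics $dr = a(b - r)\,dt + \sigma\,dW$, the variable $Y = \int_t^T r(s)\,ds$ is Gaussian, so $P(t,T) = E[e^{-Y} \mid \mathcal{F}_t] = e^{-E[Y] + \mathrm{Var}[Y]/2}$. The sketch below confirms that this coincides with $e^{-A(t,T)r(t) + D(t,T)}$; the parameter values are purely illustrative, and the mean and variance formulas used for $Y$ are the standard ones for the Vasicek model.

```python
import math

a, b, sigma = 0.5, 0.04, 0.02   # illustrative Vasicek parameters
r0, tau = 0.03, 5.0             # short rate r(t) and time to maturity T - t

# Bond price from the third proof: P = exp(-A r + D) with
# A = (1 - e^{-a tau})/a and D = (b - s^2/(2a^2))(A - tau) - s^2 A^2/(4a).
A = (1.0 - math.exp(-a * tau)) / a
D = (b - sigma**2 / (2 * a**2)) * (A - tau) - sigma**2 * A**2 / (4 * a)
price_closed = math.exp(-A * r0 + D)

# Y = int_t^T r(s) ds is Gaussian under the risk-neutral measure, with
# E[Y] = b*tau + (r0 - b)*A and Var[Y] = (s^2/a^2)(tau - A) - (s^2/(2a)) A^2,
# so P(t,T) = E[exp(-Y)] = exp(-E[Y] + Var[Y]/2).
mean_Y = b * tau + (r0 - b) * A
var_Y = (sigma**2 / a**2) * (tau - A) - (sigma**2 / (2 * a)) * A**2
price_gauss = math.exp(-mean_Y + 0.5 * var_Y)

print(price_closed, price_gauss)
```

Both routes give the same number, which is just the identity $-A\,r_0 + D = -E[Y] + \tfrac{1}{2}\mathrm{Var}[Y]$ rearranged.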







9. A version of Fernique’s theorem 


The following theorem is due to Fernique. We follow the proof of H.H. Kuo
[77].

3.103. Theorem. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $X : \Omega \to \mathbb{R}^d$ be a
Gaussian vector with mean zero. Put
\[
\alpha = \sup_{u,v>0}\, \frac{1}{\big(u + v(1+\sqrt{2})\big)^2}\,
\log\left(\frac{P(|X| \le v)}{P(|X| > u)}\right). \tag{3.303}
\]
Then $\alpha > 0$ and
\[
E\left(\exp\Big(\frac{1}{2}\,\eta\,|X|^2\Big)\right) < \infty \quad\text{for } \eta < \alpha. \tag{3.304}
\]

For the proof we shall need two lemmas. The first one contains the main idea.

3.104. Lemma. Let $(\Omega, \mathcal{F}, P)$ and $X$ be as in Theorem 3.103. Let $s > 0$ be such
that $P(|X| \le s) > 0$ and fix $t > s$. Then
\[
\frac{P(|X| > t)}{P(|X| \le s)}
\le \left(\frac{P\big(|X| > (t-s)/\sqrt{2}\big)}{P(|X| \le s)}\right)^2. \tag{3.305}
\]

Proof. Let $(\Omega \otimes \Omega, \mathcal{F} \otimes \mathcal{F}, P \otimes P)$ be the tensor product space of $(\Omega, \mathcal{F}, P)$
with itself and define $X_i$, $i = 1, 2$, by $X_i(\omega_1, \omega_2) = X(\omega_i)$. Then the variables
$X_1$ and $X_2$ are independent with respect to $P \otimes P$ and their $P \otimes P$-distribution
coincides with the $P$-distribution of $X$. We shall prove Lemma 3.104 for $s = v$
and $t = u\sqrt{2} + v$. Since the vector $(X_1, X_2)$ is Gaussian with respect to $P \otimes P$
and since the components of $X_1 - X_2$ are uncorrelated with the components
of $X_1 + X_2$ (with respect to the probability $P \otimes P$), it follows that the vectors
$X_1 - X_2$ and $X_1 + X_2$ are independent. Notice that $\int X\,dP = \int X_1\,dP \otimes P = \int X_2\,dP \otimes P = 0$ and that the covariance matrices of $X$, of $(X_1 - X_2)/\sqrt{2}$ and
of $(X_1 + X_2)/\sqrt{2}$ all coincide. It follows that the joint distributions of $(X_1, X_2)$
and of $\left(\dfrac{X_1 - X_2}{\sqrt{2}}, \dfrac{X_1 + X_2}{\sqrt{2}}\right)$ are the same as well. Hence the following (in)equalities are now self-explanatory:
\begin{align*}
P(|X| \le v)\, P\big(|X| > u\sqrt{2} + v\big)
&= P \otimes P(|X_1| \le v) \times P \otimes P\big(|X_2| > u\sqrt{2} + v\big) \\
&= P \otimes P\big(|X_1| \le v \text{ and } |X_2| > u\sqrt{2} + v\big) \\
&= P \otimes P\big(|X_1 - X_2| \le v\sqrt{2} \text{ and } |X_1 + X_2| > 2u + v\sqrt{2}\big) \\
&\le P \otimes P\big(\big||X_1| - |X_2|\big| \le v\sqrt{2} \text{ and } |X_1| + |X_2| > 2u + v\sqrt{2}\big) \\
&\le P \otimes P\big(|X_1| > u \text{ and } |X_2| > u\big) = P(|X| > u)^2. \tag{3.306}
\end{align*}
Inequality (3.305) in Lemma 3.104 follows from (3.306). □




3.105. Lemma. Let $(\Omega, \mathcal{F}, P)$ and $X$ be as in Theorem 3.103. Let $v > 0$ be such
that $P(|X| \le v) > 0$, and fix $\ell \in \mathbb{N}$ and $u > 0$. Then the following inequality
is valid:
\[
\frac{P\Big(|X| > u(\sqrt{2})^\ell + v\big((\sqrt{2})^\ell - 1\big)(\sqrt{2}+1)\Big)}{P(|X| \le v)}
\le \left(\frac{P(|X| > u)}{P(|X| \le v)}\right)^{2^\ell}. \tag{3.307}
\]

Proof. For $\ell = 0$ this assertion is trivial, and for $\ell = 1$ it is the same
as inequality (3.305) in Lemma 3.104. Next suppose that (3.307) is already
established for $\ell$. We are going to prove (3.307) with $\ell + 1$ replacing $\ell$. Again
we invoke inequality (3.305) to obtain
\begin{align*}
&\frac{P\Big(|X| > u(\sqrt{2})^{\ell+1} + v\big((\sqrt{2})^{\ell+1} - 1\big)(\sqrt{2}+1)\Big)}{P(|X| \le v)} \\
&\le \left(\frac{P\Big(|X| > \big(u(\sqrt{2})^{\ell+1} + v\big((\sqrt{2})^{\ell+1} - 1\big)(\sqrt{2}+1) - v\big)\big/\sqrt{2}\Big)}{P(|X| \le v)}\right)^2 \\
&= \left(\frac{P\Big(|X| > u(\sqrt{2})^\ell + v\big((\sqrt{2})^\ell - 1\big)(\sqrt{2}+1)\Big)}{P(|X| \le v)}\right)^2 \\
&\qquad\text{(induction hypothesis)} \\
&\le \left(\frac{P(|X| > u)}{P(|X| \le v)}\right)^{2^{\ell+1}}. \tag{3.308}
\end{align*}
The inequality in (3.308) completes the proof of Lemma 3.105. □


Proof of Theorem 3.103. If $X \equiv 0$, then there is nothing to prove. So
suppose $X \not\equiv 0$ and choose strictly positive real numbers $u$ and $v$ for which
$\dfrac{P(|X| > u)}{P(|X| \le v)} < 1$. Put
\[
a(u,v) = \frac{1}{\big(u + v(1+\sqrt{2})\big)^2}\,
\log\left(\frac{P(|X| \le v)}{P(|X| > u)}\right)
\]
and
\[
\beta(u,v) = P(|X| \le v)\, \exp\Big(-\frac{1}{2}\, a(u,v)\, v^2 (1+\sqrt{2})^2\Big).
\]
Then $a(u,v) > 0$ and $\beta(u,v) < 1$. For $s \ge u$ choose $\ell \in \mathbb{N}$ in such a way that
\[
u(\sqrt{2})^\ell + v\big((\sqrt{2})^\ell - 1\big)(\sqrt{2}+1) \le s
< u(\sqrt{2})^{\ell+1} + v\big((\sqrt{2})^{\ell+1} - 1\big)(\sqrt{2}+1).
\]
Then
\[
2^\ell > \frac{\big(s + v(1+\sqrt{2})\big)^2}{2\big(u + v(1+\sqrt{2})\big)^2}
\ge \frac{s^2}{2\big(u + v(1+\sqrt{2})\big)^2} + \frac{v^2(1+\sqrt{2})^2}{2\big(u + v(1+\sqrt{2})\big)^2},
\]
and hence
\begin{align*}
P(|X| > s)
&\le P\Big(|X| > u(\sqrt{2})^\ell + v\big((\sqrt{2})^\ell - 1\big)(\sqrt{2}+1)\Big) \\
&\qquad\text{(inequality (3.307) in Lemma 3.105)} \\
&\le P(|X| \le v) \left(\frac{P(|X| > u)}{P(|X| \le v)}\right)^{2^\ell} \\
&\le \beta(u,v)\, \exp\Big(-\frac{1}{2}\, a(u,v)\, s^2\Big)
\le \exp\Big(-\frac{1}{2}\, a(u,v)\, s^2\Big). \tag{3.309}
\end{align*}
If $0 < \eta < \alpha$, then we choose $u, v > 0$ in such a way that $\alpha \ge a(u,v) > \eta$.
Then, for $s \ge u$, $P(|X| > s) \le \exp\big(-\frac{1}{2} a(u,v) s^2\big)$. Consequently, we get from
(3.309):
\begin{align*}
E\Big(\exp\Big(\frac{1}{2}\eta\,|X|^2\Big) : |X| > u\Big)
&= \int_0^\infty P\Big(\exp\Big(\frac{1}{2}\eta\,|X|^2\Big) > \xi,\ |X| > u\Big)\,d\xi \\
&\qquad\text{(substitute } \xi = \exp\big(\tfrac{1}{2}\eta s^2\big)\text{)} \\
&\le \exp\Big(\frac{1}{2}\eta u^2\Big) P(|X| > u)
+ \eta \int_u^\infty P(|X| > s)\, \exp\Big(\frac{1}{2}\eta s^2\Big)\, s\,ds \\
&\le \exp\Big(\frac{1}{2}\eta u^2\Big)
+ \eta \int_u^\infty \exp\Big(-\frac{1}{2}\big(a(u,v) - \eta\big) s^2\Big)\, s\,ds \\
&= \exp\Big(\frac{1}{2}\eta u^2\Big)
+ \frac{\eta}{a(u,v) - \eta}\, \exp\Big(-\frac{1}{2}\big(a(u,v) - \eta\big) u^2\Big) \\
&\le \frac{a(u,v)}{a(u,v) - \eta}\, \exp\Big(\frac{1}{2}\eta u^2\Big). \tag{3.310}
\end{align*}
From (3.310) we infer
\[
E\Big(\exp\Big(\frac{1}{2}\eta\,|X|^2\Big)\Big)
\le \Big(1 + \frac{a(u,v)}{a(u,v) - \eta}\Big)\, \exp\Big(\frac{1}{2}\eta u^2\Big). \tag{3.311}
\]
Inequality (3.311) yields the desired result in Theorem 3.103. □
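Again for a one-dimensional standard normal $X$, the tail estimate (3.309) and the finiteness of the exponential moment can be checked numerically. The sketch below uses $a(u,v)$ as in the proof above (the choice $u = v = 1$ is arbitrary); for a standard normal the moment $E\exp(\eta X^2/2)$ is known in closed form, namely $1/\sqrt{1-\eta}$ for $\eta < 1$.

```python
import math

def tail(x):
    # P(|X| > x) for a standard normal X
    return math.erfc(x / math.sqrt(2.0))

u, v = 1.0, 1.0
ratio = tail(u) / (1.0 - tail(v))
assert ratio < 1.0
a_uv = math.log(1.0 / ratio) / (u + v * (1.0 + math.sqrt(2.0)))**2

# Tail estimate (3.309): P(|X| > s) <= exp(-a(u,v) s^2 / 2) for all s >= u.
for k in range(40):
    s = u + 0.25 * k
    assert tail(s) <= math.exp(-0.5 * a_uv * s * s)

# For eta < a(u,v) <= alpha the exponential moment is finite; for a
# standard normal, E[exp(eta X^2 / 2)] = 1/sqrt(1 - eta) in closed form.
eta = 0.9 * a_uv
moment = 1.0 / math.sqrt(1.0 - eta)
print(a_uv, moment)
```

The computed $a(u,v)$ is far smaller than the true Gaussian decay rate $1/2$, so the tail bound holds with plenty of room; the theorem only claims some positive rate.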


10. Miscellaneous

We begin this section with Doob's optional stopping property for discrete-time
submartingales. Let $\{X(n) : n \in \mathbb{N}\}$ be a submartingale relative to the
filtration $\{\mathcal{F}_n : n \in \mathbb{N}\}$. Here the random variables $X(n)$ are defined on a probability space $(\Omega, \mathcal{F}, P)$. The following result was used in inequality (3.164), the
basic step for the continuous-time version of the following proposition.




3.106. Proposition. Let $\tau$ be a stopping time. The process
\[
\{X(\min(n, \tau)) : n \in \mathbb{N}\}
\]
is a submartingale with respect to the filtration $\{\mathcal{F}_n : n \in \mathbb{N}\}$ as well as with
respect to the filtration $\{\mathcal{F}_{\min(n,\tau)} : n \in \mathbb{N}\}$.





Proof. Let $m$ and $n$ be natural numbers with $m < n$ and let $A$ be a
member of $\mathcal{F}_m$. Then we have
\begin{align*}
&E\big(X(\min(n,\tau))\,1_A\big) - E\big(X(\min(m,\tau))\,1_A\big) \\
&= \sum_{k=m+1}^{n} \big\{E\big(X(\min(k,\tau))\,1_A\big) - E\big(X(\min(k-1,\tau))\,1_A\big)\big\} \\
&= \sum_{k=m+1}^{n} \big\{E\big(\big(X(\min(k,\tau)) - X(\min(k-1,\tau))\big)\,1_{A \cap \{\tau \ge k\}}\big)\big\} \\
&= \sum_{k=m+1}^{n} E\big\{E\big(\big(X(\min(k,\tau)) - X(\min(k-1,\tau))\big)\,1_{A \cap \{\tau \ge k\}} \mid \mathcal{F}_{k-1}\big)\big\}
\end{align*}
(the event $A \cap \{\tau \ge k\}$ belongs to $\mathcal{F}_{k-1}$ for $k \ge m+1$, and the variable $X(k-1)$
is $\mathcal{F}_{k-1}$-measurable)
\[
= \sum_{k=m+1}^{n} E\big(\big(E\big(X(k) \mid \mathcal{F}_{k-1}\big) - X(k-1)\big)\,1_{A \cap \{\tau \ge k\}}\big)
\]
(submartingale property of the process $\{X(k) : k \in \mathbb{N}\}$)
\[
\ge \sum_{k=m+1}^{n} E\big(0 \times 1_{A \cap \{\tau \ge k\}}\big) = 0. \tag{3.312}
\]
The inequality in (3.312) proves that the process $\{X(\min(k,\tau)) : k \in \mathbb{N}\}$ is a submartingale for the filtration $\{\mathcal{F}_k : k \in \mathbb{N}\}$. Since the $\sigma$-field $\mathcal{F}_{\min(k,\tau)}$ is contained
in the $\sigma$-field $\mathcal{F}_k$, $k \in \mathbb{N}$, it also follows that the process
\[
\{X(\min(k,\tau)) : k \in \mathbb{N}\}
\]
is a submartingale with respect to the filtration $\{\mathcal{F}_{\min(k,\tau)} : k \in \mathbb{N}\}$, because
we have
\begin{align*}
E\big(X(\min(m+1,\tau)) \mid \mathcal{F}_{\min(m,\tau)}\big)
&= E\big(E\big(X(\min(m+1,\tau)) \mid \mathcal{F}_m\big) \mid \mathcal{F}_{\min(m,\tau)}\big) \\
&\qquad\text{(employ (3.312))} \\
&\ge E\big(X(\min(m,\tau)) \mid \mathcal{F}_{\min(m,\tau)}\big) = X(\min(m,\tau)). \tag{3.313}
\end{align*}
The inequalities (3.312) and (3.313) together prove the results in Proposition 3.106. □
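Proposition 3.106 can be verified exhaustively on a small example: a symmetric $\pm 1$ random walk (a martingale, hence also a submartingale) stopped at the first hitting time of a level. Both the walk length and the level below are illustrative choices; by exact enumeration over all $2^n$ paths, the stopped process has constant expectation.

```python
from itertools import product

n = 8        # walk length (illustrative)
level = 2    # tau = first time the walk reaches this level

def stopped_value(steps, k):
    # Position of the walk stopped at min(k, tau).
    path = [0]
    for s in steps:
        path.append(path[-1] + s)
    tau = next((i for i, pos in enumerate(path) if pos >= level), n)
    return path[min(k, tau)]

paths = list(product([1, -1], repeat=n))
means = [sum(stopped_value(p, k) for p in paths) / len(paths)
         for k in range(n + 1)]
print(means)   # optional stopping: every entry equals E[X(0)] = 0
```

Since $\min(k,\tau)$ is a bounded stopping time and the walk is a martingale, every entry of `means` is exactly $0$; the arithmetic is exact because all sums are integers divided by a power of two.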

Next we prove Doob's maximal inequality for martingales.

3.107. Proposition. Let $\{M(n) : n \in \mathbb{N}\}$ be a martingale. Put
\[
M(n)^* = \max_{k \le n} |M(k)|.
\]
The following inequalities are valid:
\begin{align}
P\left[M(n)^* \ge \lambda\right] &\le \frac{1}{\lambda}\, E\left[|M(n)| : M(n)^* \ge \lambda\right]; \tag{3.314} \\
P\left[M(n)^* \ge \lambda\right] &\le \frac{1}{\lambda^2}\, E\left[|M(n)|^2 : M(n)^* \ge \lambda\right]. \tag{3.315}
\end{align}
Let $\{M(t) : t \ge 0\}$ be a continuous-time martingale that is right continuous and
possesses left limits. Put $M(t)^* = \sup_{0 \le s \le t} |M(s)|$. Again inequalities like
(3.314) and (3.315) are true:
\begin{align}
P\left\{M(t)^* \ge \lambda\right\} &\le \frac{1}{\lambda}\, E\left\{|M(t)| : M(t)^* \ge \lambda\right\}; \tag{3.316} \\
P\left\{M(t)^* \ge \lambda\right\} &\le \frac{1}{\lambda^2}\, E\left\{|M(t)|^2 : M(t)^* \ge \lambda\right\}. \tag{3.317}
\end{align}


Proof. We begin by establishing inequality (3.314). Define the events $A_k$,
$0 \le k \le n$, by $A_0 = \{|M(0)| \ge \lambda\}$,
\[
A_k = \big\{|M(j)| < \lambda,\ 0 \le j \le k-1,\ |M(k)| \ge \lambda\big\}, \quad 1 \le k \le n.
\]
Then $\bigcup_{k=0}^{n} A_k = \{M(n)^* \ge \lambda\}$, $A_k \cap A_\ell = \emptyset$ for $k \ne \ell$, $0 \le k, \ell \le n$, and
$A_k$ is $\mathcal{F}_k$-measurable for $0 \le k \le n$. Moreover, on the event $A_k$ the inequality
$|M(k)| \ge \lambda$ is valid. From the martingale property it then follows that:
\begin{align*}
P\big(M(n)^* \ge \lambda\big) = \sum_{k=0}^{n} P(A_k)
&\le \frac{1}{\lambda} \sum_{k=0}^{n} E\big(1_{A_k}\, |M(k)|\big)
= \frac{1}{\lambda} \sum_{k=0}^{n} E\big(1_{A_k}\, \big|E\big(M(n) \mid \mathcal{F}_k\big)\big|\big) \\
&= \frac{1}{\lambda} \sum_{k=0}^{n} E\big(\big|E\big(1_{A_k} M(n) \mid \mathcal{F}_k\big)\big|\big)
\le \frac{1}{\lambda} \sum_{k=0}^{n} E\big(E\big(1_{A_k}\, |M(n)| \mid \mathcal{F}_k\big)\big) \\
&= \frac{1}{\lambda} \sum_{k=0}^{n} E\big(1_{A_k}\, |M(n)|\big)
= \frac{1}{\lambda}\, E\big(|M(n)| : M(n)^* \ge \lambda\big). \tag{3.318}
\end{align*}
Notice that inequality (3.318) is the same as (3.314). The proof of (3.315)
goes along the same lines. The fact is used that the process $\{|M(n)|^2 : n \in \mathbb{N}\}$
constitutes a submartingale. The details read as follows. The events $A_k$, $0 \le k \le n$, are defined as in the proof of (3.314). The argument in (3.318) is adapted
as below:
\begin{align*}
P\big(M(n)^* \ge \lambda\big) = \sum_{k=0}^{n} P(A_k)
&\le \frac{1}{\lambda^2} \sum_{k=0}^{n} E\big(1_{A_k}\, |M(k)|^2\big)
\le \frac{1}{\lambda^2} \sum_{k=0}^{n} E\big(1_{A_k}\, E\big(|M(n)|^2 \mid \mathcal{F}_k\big)\big) \\
&= \frac{1}{\lambda^2} \sum_{k=0}^{n} E\big(E\big(1_{A_k}\, |M(n)|^2 \mid \mathcal{F}_k\big)\big)
= \frac{1}{\lambda^2} \sum_{k=0}^{n} E\big(1_{A_k}\, |M(n)|^2\big) \\
&= \frac{1}{\lambda^2}\, E\big(|M(n)|^2 : M(n)^* \ge \lambda\big). \tag{3.319}
\end{align*}
Again we notice that (3.319) is the same as (3.315). The inequalities in (3.316)
and (3.317) are based on a time discretization of the martingale $\{M(t) : t \ge 0\}$.
Therefore we write $N(j) := M\big(j 2^{-n} t\big)$ and we notice that $\{N(j) : j \in \mathbb{N}\}$ is
a martingale for the filtration $\{\mathcal{F}_{j 2^{-n} t} : j \in \mathbb{N}\}$. From (3.314) we obtain the
inequality:
\[
P\Big(\max_{0 \le j \le 2^n} |N(j)| \ge \lambda\Big) \le \frac{1}{\lambda}\, E\big(|M(t)| : M(t)^* \ge \lambda\big). \tag{3.320}
\]
Inequality (3.316) is obtained from (3.320) upon letting $n$ tend to $\infty$. The proof
of (3.317) follows in the same manner from (3.315). □
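The discrete-time inequalities (3.314) and (3.315) can likewise be checked by exact enumeration over all paths of a symmetric $\pm 1$ random walk; the walk length and the level $\lambda$ below are arbitrary choices.

```python
from itertools import product

n, lam = 10, 3.0     # walk length and level (illustrative)
paths = list(product([1, -1], repeat=n))

p_star = e_abs = e_sq = 0.0
for p in paths:
    pos, m_star, m_final = 0, 0, 0
    for s in p:
        pos += s
        m_star = max(m_star, abs(pos))
        m_final = pos
    if m_star >= lam:
        p_star += 1.0
        e_abs += abs(m_final)     # contributes to E[|M(n)| ; M(n)* >= lam]
        e_sq += m_final ** 2      # contributes to E[|M(n)|^2 ; M(n)* >= lam]

num = float(len(paths))
p_star, e_abs, e_sq = p_star / num, e_abs / num, e_sq / num
print(p_star, e_abs / lam, e_sq / lam**2)
```

Both bounds hold, with the restriction of the expectation to the event $\{M(n)^* \ge \lambda\}$ making them noticeably sharper than the unrestricted Markov-type bounds.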

We continue with a proof of the (DL)-property of martingales. More precisely
we shall prove the following proposition.

3.108. Proposition. Let $\{M(s) : s \ge 0\}$ be a right continuous martingale on
the probability space $(\Omega, \mathcal{F}, P)$. Fix $t \ge 0$. Then the collection of random variables
\[
\{M(\tau) : 0 \le \tau \le t,\ \tau \text{ stopping time}\}
\]
is uniformly integrable.

Proof. Fix a stopping time $0 \le \tau \le t$ and write $\tau_n = \min\big(2^{-n}\lceil 2^n \tau\rceil, t\big)$.
Then $0 \le \tau_n \le t$ and every $\tau_n$ is a stopping time. Moreover $\tau_n \downarrow \tau$ if $n$ tends
to $\infty$. Since the pair $\big(M(\tau_n), M(t)\big)$ is a martingale for the pair of $\sigma$-fields
$\big(\mathcal{F}_{\tau_n}, \mathcal{F}_t\big)$ (use Proposition 3.106 for martingales), the pair $\big(|M(\tau_n)|, |M(t)|\big)$ is
a submartingale with respect to the same pair of $\sigma$-fields. As a consequence we
obtain:
\begin{align*}
E\big(|M(\tau)| : |M(\tau)| \ge \lambda\big)
&= E\big(\liminf_{n \to \infty} |M(\tau_n)|\, 1_{\{|M(\tau_n)| \ge \lambda\}}\big) \\
&\qquad\text{(Fatou's lemma)} \\
&\le \liminf_{n \to \infty} E\big(|M(\tau_n)|\, 1_{\{|M(\tau_n)| \ge \lambda\}}\big) \\
&\qquad\text{(submartingale property)} \\
&\le \liminf_{n \to \infty} E\big(E\big(|M(t)| \mid \mathcal{F}_{\tau_n}\big)\, 1_{\{|M(\tau_n)| \ge \lambda\}}\big) \\
&= \liminf_{n \to \infty} E\big(|M(t)|\, 1_{\{|M(\tau_n)| \ge \lambda\}}\big)
= \liminf_{n \to \infty} E\big(|M(t)| : |M(\tau_n)| \ge \lambda\big) \\
&\le E\big(|M(t)| : M(t)^* \ge \lambda\big). \tag{3.321}
\end{align*}
This proves Proposition 3.108. □

Remark. In the proof of Proposition 3.108 we did use a discrete approximation
of a stopping time. However, we could have avoided this and considered directly the
pair $(M(\tau), M(t))$. From Proposition 3.106 we see that this pair is a martingale
with respect to the pair of $\sigma$-fields $(\mathcal{F}_\tau, \mathcal{F}_t)$. This would then imply inequality
(3.321) with $\tau$ replacing $\tau_n$. On the other hand, the discrete approximation
of stopping times as performed in the proof of Proposition 3.108 is kind of



a standard procedure for passing from discrete time valued stopping times to 
continuous time valued stopping times. This is a good reason to insert this kind 
of argument. 

The main result of Section 3 of this chapter says that linear operators in $C_0(E)$
which maximally solve the martingale problem are generators of Feller semigroups, and conversely. In the sequel we want to verify the claim in the example
of Section 3. Its statement is correct, but its proof is erroneous. Example 3.49
in Section 3 reads as follows.

3.109. Example. Let $L_0$ be an unbounded generator of a Feller semigroup in
$C_0(E)$ and let $\mu_k$ and $\nu_k$, $1 \le k \le n$, be finite (signed) Borel measures on $E$.
Define the operator $L_{\mu,\nu}$ as follows:
\[
D\big(L_{\mu,\nu}\big) = \bigcap_{k=1}^{n} \Big\{f \in D(L_0) : \int L_0 f\,d\mu_k = \int f\,d\nu_k\Big\},
\qquad
L_{\mu,\nu} f = L_0 f, \quad f \in D\big(L_{\mu,\nu}\big).
\]
Then the martingale problem is uniquely solvable for $L_{\mu,\nu}$. In fact, let
\[
\{(\Omega, \mathcal{F}, P_x),\ (X(t),\ t \ge 0)\}
\]
be the strong Markov process associated to the Feller semigroup generated by
$L_0$. Then $P = P_x$ solves the martingale problem

(a) For every $f \in D\big(L_{\mu,\nu}\big)$ the process
\[
f(X(t)) - f(X(0)) - \int_0^t L_{\mu,\nu} f(X(s))\,ds, \quad t \ge 0,
\]
is a $P$-martingale;

(b) $P(X(0) = x) = 1$,

uniquely. In particular we may take $E = [0,1]$, $L_0 f = \frac{1}{2} f''$,
\[
D(L_0) = \big\{f \in C^2[0,1] : f'(0) = f'(1) = 0\big\},
\]
$\mu_k(I) = \int_I 1_{[\alpha_k,\beta_k]}(x)\,dx$, $\nu_k = 0$, $0 \le \alpha_k < \beta_k \le 1$, $1 \le k \le n$. Then $L_0$ generates
the Feller semigroup of reflected Brownian motion: see Liggett [86], Example
5.8, p. 45. For the operator $L_{\mu,\nu}$ the martingale problem is uniquely (but not
maximally uniquely) solvable. However, it does not generate a Feller semigroup.

From the result in Theorem 3.45 this can be seen as follows. Define the functionals $\Lambda_j : D(L_0) \to \mathbb{C}$, $1 \le j \le n$, as follows:
\[
\Lambda_j(f) = \int L_0 f\,d\mu_j - \int f\,d\nu_j, \quad 1 \le j \le n.
\]
We may and do suppose that the functionals $\Lambda_j$, $1 \le j \le n$, are linearly
independent and that their linear span does not contain linear combinations of
Dirac measures. The latter implies that, for every $x_0 \in E$ and for every function
$u \in D(L_0)$, the convex subsets
\[
D(L_1) \cap \big\{\{g \in C_0(E) : \operatorname{Re} g \le \operatorname{Re} g(x_0)\} + u\big\}
\quad\text{and}\quad
D(L_1) \cap \big\{\{h \in C_0(E) : \operatorname{Re} h \ge \operatorname{Re} h(x_0)\} + u\big\}
\]
are non-void. The latter follows from a Hahn-Banach argument. Hopefully,
it will also imply that the quantities in (3.322) and (3.323) coincide. Since
$D(L_0^2)$ forms a core for $L_0$, we may choose functions $u_k$, $1 \le k \le n$, such that
$\Lambda_j(u_k) = \delta_{jk}$ and such that every $u_k$, $1 \le k \le n$, is in the intersection of the two
spaces
\[
\big\{u \in D(L_0^2) : R(1)u \in D(L_1)\big\}
\quad\text{and}\quad
\big\{u \in D(L_0^2) : R(2)u \in D(L_1)\big\}.
\]
As operator $L_1$ we take $L_1 = L_{\mu,\nu}$ and for $T$ we take $Tf = \sum_{k=1}^{n} \Lambda_k(f)\,u_k$,
$f \in D(L_0)$.




The remainder of this section is devoted to the proof of the following result.
Whenever appropriate we write $R(\lambda)$ for the operator $(\lambda I - L_0)^{-1}$.

3.110. Theorem. Let $L_0$ be the generator of a Feller semigroup in $C_0(E)$ and
let $L_1$ and $T$ be linear operators with the following properties: the operator $I - T$
has domain $D(L_0)$ and range $D(L_1)$, $L_1$ verifies the maximum principle, the
vector sum of the spaces $R(I - T)$ and $R\big(L_1(I - T)\big)$ is dense in $C_0(E)$, and
the operator $L_1(I - T) - (I - T)L_0$ can be considered as a continuous linear
operator on the domain of $L_0$. More precisely, it is assumed that
\[
\limsup_{\lambda \to \infty} \big\|\big(L_1(I - T) - (I - T)L_0\big) R(\lambda)\big\| < 1.
\]
Then there exists at most one linear extension $L$ of the operator $L_1$ for which
$L T$ is bounded and which generates a Feller semigroup. In particular, if the
martingale problem is solvable for $L_1$, then it is uniquely solvable for $L_1$.

Before we actually prove this result we would like to make some comments. In order
to have existence and uniqueness for the extension $L$ on $R(T)$ it suffices that
for every $v \in R(T)$ and for every $x_0 \in E$ the following two expressions are equal:
\begin{align}
&\lim_{\varepsilon \downarrow 0}\ \inf_{f \in D(L_1)} \Big\{\operatorname{Re} L_1 f(x_0) : \inf_{y \in E} \operatorname{Re}\big(f(y) - v(y)\big) \ge \operatorname{Re}\big(f(x_0) - v(x_0)\big) - \varepsilon\Big\},
\tag{3.322} \\
&\lim_{\varepsilon \downarrow 0}\ \sup_{f \in D(L_1)} \Big\{\operatorname{Re} L_1 f(x_0) : \sup_{y \in E} \operatorname{Re}\big(f(y) - v(y)\big) \le \operatorname{Re}\big(f(x_0) - v(x_0)\big) + \varepsilon\Big\}.
\tag{3.323}
\end{align}
This common value is then by definition $\operatorname{Re} L_2 v(x_0)$. The value of $L_2 v(x_0)$ is
then given by $[L_2 v](x_0) = \operatorname{Re}[L_2 v](x_0) - i \operatorname{Re}[L_2(iv)](x_0)$ for $v \in R(T)$. Let
$\Lambda_j$, $1 \le j \le n$, be as in the example above. For every $x_0 \in E$ and for
every $1 \le k \le n$ there exist functions $g_k$ and $h_k \in D(L_0)$ with the following
properties: $\Lambda_\ell(g_k) = \Lambda_\ell(h_k) = \delta_{k\ell}$, $\operatorname{Re} g_k(x) \le \operatorname{Re} g_k(x_0)$ and $\operatorname{Re} h_k(x_0) \le \operatorname{Re} h_k(x)$ for all $x \in E$, and $\operatorname{Re} L_1(h_k - g_k)(x_0) = 0$. It then readily follows that
the two expressions in (3.322) and (3.323) are equal for functions $v$ in the linear
span of $u_1, \ldots, u_n$. Notice that the function $\operatorname{Re} h_k$ attains its minimum at $x_0$
and that the function $\operatorname{Re} g_k$ attains its maximum at $x_0$. In order to define
$[L_2 u_k](x_0)$ we choose functions $g_k$ and $h_k$ with $\Lambda_\ell(g_k) = \Lambda_\ell(h_k) = -\delta_{k\ell}$ in
such a way that the function $\operatorname{Re} g_k$ attains its maximum at $x_0$ and that the
function $\operatorname{Re} h_k$ attains its minimum at the same point $x_0$. Moreover we may
and do suppose that $\operatorname{Re} L_1(g_k - h_k)(x_0) = 0$. The value $[L_2 u_k](x_0)$ is then
given by $[L_2 u_k](x_0) = [L_1(g_k + u_k)](x_0) = [L_1(h_k + u_k)](x_0)$.

Proof of Theorem 3.110. Let $L$ be any linear operator which extends
$L_1$ and which has the property that its domain $D$ contains $R(T) = T D(L_0)$.
We also suppose that $L$ verifies the maximum principle. Let $L_1$ be the restriction
of $L$ to $R(I - T)$ and let $L_2$ be the operator $L$ restricted to $R(T)$. We shall prove
that the operator $L_1$ has a unique extension that generates a Feller semigroup.
We start with the construction of a family of kind of intertwining operators




$\{V(\lambda) : \lambda > 0 \text{ and large}\}$. This is done as follows. The symbol $R(\lambda)$ is always
used to denote the operator $R(\lambda) = (\lambda I - L_0)^{-1}$. Define the operator $V$ by
\[
V = L_1(I - T) - (I - T)L_0 \tag{3.324}
\]
and define the operator $V(\lambda)$, $\lambda > \|L_2 T\|$, via the equality
\[
V(\lambda) = \lambda\, (\lambda I - L_2 T)^{-1} V. \tag{3.325}
\]
Then we have:
\[
(\lambda I - L_1)(I - T) + (\lambda I - L_2)\, T\, \frac{V(\lambda)}{\lambda}
= (I - T)\big(\lambda I - L_0 - V(\lambda)\big). \tag{3.326}
\]
An equivalent form of (3.326) is the equality
\begin{align*}
(\lambda I - L_1)(I - T) + (\lambda I - L_2)\, T\, \frac{V(\lambda)}{\lambda}
&= (I - T)\big((\lambda I - L_0) - V(\lambda)\big) \tag{3.327} \\
&= (I - T)\big(I - V(\lambda)R(\lambda)\big)(\lambda I - L_0).
\end{align*}
Next we shall prove that the martingale problem is solvable for $L_1$. We do this
by showing that the operator $L_1$ extends to a generator $L$ of a Feller semigroup.
For large positive $\lambda$ we define the operators $G(\lambda)$ in $C_0(E)$ as follows. For
$f$ of the form $f = (I - T)g$, with $g = \big(I - V(\lambda)R(\lambda)\big)(\lambda I - L_0)h$, we write
\[
G(\lambda)f = G(\lambda)(I - T)g = \Big(I - T + T\,\frac{V(\lambda)}{\lambda}\Big)\, h, \tag{3.328}
\]
and if the function $f$ is of the form $f = (\lambda I - L_1)(I - T)g$ we write
\[
G(\lambda)f = G(\lambda)(\lambda I - L_1)(I - T)g = (I - T)g. \tag{3.329}
\]
If $(\lambda I - L_1)(I - T)g_1 = (I - T)g_2$, then, since $I - T$ is a mapping attaining
values in the domain of $L_1$, we see that $(I - T)g_1 - (I - T)h_2$ belongs to
$D(L_1)$, and hence the following identities are mutually equivalent (we write
$g_2 = \big(I - V(\lambda)R(\lambda)\big)(\lambda I - L_0)h_2$):
\begin{align*}
(\lambda I - L_1)(I - T)g_1 &= (I - T)g_2
= (I - T)\big(I - V(\lambda)R(\lambda)\big)(\lambda I - L_0)h_2 \\
&= (\lambda I - L_1)(I - T)h_2 + (\lambda I - L_2)\, T\,\frac{V(\lambda)}{\lambda}\, h_2; \\
(\lambda I - L_1)\big((I - T)g_1 - (I - T)h_2\big) &= (\lambda I - L_2)\, T\,\frac{V(\lambda)}{\lambda}\, h_2; \\
(I - T)g_1 &= \Big(I - T + T\,\frac{V(\lambda)}{\lambda}\Big)\, h_2. \tag{3.330}
\end{align*}


Since g₂ = (I − V(λ)R(λ))(λI − L₀)h₂, it follows that

(λI − L)((I − T)(g₁ − h₂) − T(V(λ)/λ)h₂)
= (λI − L₁)((I − T)g₁ − (I − T)h₂) − (λI − L₂)T(V(λ)/λ)h₂
= (I − T)g₂ − (λI − L₁)(I − T)h₂ − (λI − L₂)T(V(λ)/λ)h₂
= (I − T)(I − V(λ)R(λ))(λI − L₀)h₂ − (λI − L₁)(I − T)h₂ − (λI − L₂)T(V(λ)/λ)h₂
= (λI − L₁)(I − T)h₂ + (λI − L₂)T(V(λ)/λ)h₂ − (λI − L₁)(I − T)h₂
− (λI − L₂)T(V(λ)/λ)h₂ = 0. (3.331)

Since the operator L verifies the maximum principle, it is dissipative, and so
the zero space of λI − L is trivial. We conclude from (3.331) the identity
T(V(λ)/λ)h₂ = (I − T)g₁ − (I − T)h₂, and so the function T(V(λ)/λ)h₂ belongs
to D(L₁). Hence it follows that (3.330) is satisfied, and consequently the
operator G(λ) is well-defined. Next we pick h₁ and h₂ in the domain of L₀ and
we write

f = λ(I − T)(λI − L₀ − V(λ))h₂ + (λI − L₁)(I − T)(h₁ − λh₂). (3.332)

A calculation will yield the following identities:

G(λ)f = (I − T)h₁ + TV(λ)h₂;
λG(λ)f − f = L₁(I − T)h₁ + L₂TV(λ)h₂ = L(G(λ)f). (3.333)
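The identities (3.332) and (3.333) can again be tested with matrices. In this check (our own; all names are stand-ins) T must be a genuine projection, T² = T, so that L = L₁(I − T) + L₂T acts as L₁ on the range of I − T and as L₂ on the range of T.

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam = 6, 100.0
L0, L1, L2 = (rng.standard_normal((n, n)) for _ in range(3))
Q, _ = np.linalg.qr(rng.standard_normal((n, 2)))
T = Q @ Q.T                                  # orthogonal projection: T @ T == T
I = np.eye(n)

V = L1 @ (I - T) - (I - T) @ L0
Vlam = lam * np.linalg.solve(lam * I - L2 @ T, V)
L = L1 @ (I - T) + L2 @ T

h1, h2 = rng.standard_normal(n), rng.standard_normal(n)
f = lam * (I - T) @ (lam * I - L0 - Vlam) @ h2 \
    + (lam * I - L1) @ (I - T) @ (h1 - lam * h2)     # as in (3.332)
Gf = (I - T) @ h1 + T @ Vlam @ h2                    # G(lam) f, as in (3.333)

assert np.allclose(lam * Gf - f, L @ Gf)             # second identity in (3.333)
assert np.allclose((lam * I - L) @ Gf, f)
```

The second assertion is exactly the statement (λI − L)G(λ)f = f asserted in the next sentence of the proof.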

Consequently we get (λI − L)G(λ)f = f, for f of the form (3.332). Since
we know ‖V(λ)R(λ)‖ < 1 and since, by assumption, the subspace R(I − T) +
R(L₁(I − T)) is dense in C₀(E), it follows that the range R(λI − L) is dense
for λ > 0, λ large. Since the operator L satisfies the maximum principle and
since L = L₁(I − T) + L₂T, it follows that the operator L that assigns to
G(λ)f the function λG(λ)f − f, f ∈ R(I − T) + R(L₁(I − T)), is well defined
and satisfies the maximum principle. Below we shall show that the family
{G(λ) : λ > 0, λ large} is a resolvent family indeed: see (3.337). The closure
of its graph contains the graph

{((I − T)h₁ + Th₂, L₁(I − T)h₁ + L₂Th₂) : h₁, h₂ ∈ D(L₀)}.

Denote the operator with graph {(G(λ)f, λG(λ)f − f) : f ∈ C₀(E)} again by L.
From the previous remarks it follows that the operator L verifies the maximum
principle, that (λI − L)G(λ)f = f for f ∈ C₀(E), and that it is densely defined.
The latter follows because its domain contains all vectors of the form

(I − T)f₁ + L₁(I − T)f₂ = (I − T)(f₁ + (I − T)L₁(I − T)f₂ + TL₀f₂) + TVf₂.
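The decomposition just displayed can be verified directly with matrices. The computation (ours, with hypothetical matrix stand-ins) relies on T being a projection, so that T(I − T) = (I − T)T = 0.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6
L0, L1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, 3)))
T = Q @ Q.T                                   # projection: T @ T == T
I = np.eye(n)
V = L1 @ (I - T) - (I - T) @ L0               # as in (3.324)

f1, f2 = rng.standard_normal(n), rng.standard_normal(n)
lhs = (I - T) @ f1 + L1 @ (I - T) @ f2
rhs = (I - T) @ (f1 + (I - T) @ L1 @ (I - T) @ f2 + T @ L0 @ f2) + T @ V @ f2
assert np.allclose(lhs, rhs)
```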




From a general argument it then follows that the operator L is the generator of
a Feller semigroup: for more details see [Ml], Theorem 2.2, page 14. Next let
h₁ and h₂ belong to D(L₀). Then we have

λ‖G(λ)((I − T)h₁ + (λI − L₁)(I − T)h₂)‖∞
= λ‖G(λ)(I − T)h₁ + (I − T)h₂‖∞

(the operator L is dissipative)

≤ ‖(λI − L)(G(λ)(I − T)h₁ + (I − T)h₂)‖∞
= ‖(I − T)h₁ + (λI − L₁)(I − T)h₂‖∞. (3.334)

Since the vector sum of the spaces R(I − T) and R(L₁(I − T)) is dense, it follows
from (3.334) that the operator G(λ) extends as a continuous linear operator to
all of C₀(E). Moreover, it is dissipative in the sense that

λ‖G(λ)‖ ≤ 1. (3.335)

Next we prove that the operator G(λ) is positive in the sense that f ≥ 0,
f ∈ C₀(E), implies G(λ)f ≥ 0. So let f ∈ C₀(E) be non-negative. There exist
sequences of functions (gₙ) and (hₙ) in the space D(L₀) for which

f = lim n→∞ ((I − T)hₙ + (λI − L₁)(I − T)gₙ).
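Estimates (3.334)-(3.335) together with the positivity argument below say that λG(λ) is a positive contraction. For the generator of a finite-state Markov chain (a Q-matrix, the simplest example of an operator satisfying the maximum principle) both properties are visible directly; this toy example and its names are our own.

```python
import numpy as np

rng = np.random.default_rng(3)
n, lam = 4, 3.0
Q = rng.random((n, n))                  # off-diagonal jump rates >= 0
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))     # rows sum to 0: a Markov generator
G = np.linalg.inv(lam * np.eye(n) - Q)  # the resolvent (lam I - Q)^{-1}

assert np.all(lam * G >= -1e-12)                # lam G(lam) is positive
assert np.allclose((lam * G).sum(axis=1), 1.0)  # sup-norm contraction: lam ||G|| = 1
```

Here λI − Q is a diagonally dominant M-matrix, which is why its inverse has nonnegative entries, and (λI − Q)𝟙 = λ𝟙 forces the rows of λG to sum to one.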





Put fₙ = (I − T)hₙ + (λI − L₁)(I − T)gₙ. Then (λI − L)G(λ)fₙ = fₙ (see
(3.332) and (3.333)). Since the operator L verifies the maximum principle it
follows that

λ Re G(λ)fₙ ≥ inf_y Re ((λI − L)G(λ)fₙ)(y) = inf_y Re fₙ(y), (3.336)

and hence Re λG(λ)f = Re lim n→∞ λG(λ)fₙ ≥ 0. A similar argument will show
that the operator G(λ) sends real functions to real functions, and hence G(λ) is
positivity preserving. Next we prove that the family {G(λ) : λ > 0, large} is
a resolvent family. So let λ and μ be large positive real numbers. We want to
prove the identity

G(λ) − G(μ) − (μ − λ)G(μ)G(λ) = 0. (3.337)

First pick the function f ∈ D(L₀), apply the operator in (3.337) to the
function (λI − L₁)(I − T)f, and employ the identity G(λ)(λI − L)f = f, for f
belonging to D(L), to obtain

(G(λ) − G(μ) − (μ − λ)G(μ)G(λ))(λI − L₁)(I − T)f = 0.

The operator in (3.337) also sends functions in the space R(I − T) to 0, because
we may apply (3.333) to see that

(μI − L)(G(λ) − G(μ) − (μ − λ)G(μ)G(λ))(I − T)f = 0

for f ∈ D(L₀). Finally we show that the resolvent family {G(λ) : λ > 0 large}
is strongly continuous in the sense that lim λ→∞ λG(λ)f = f for all f ∈ C₀(E).
Of course it suffices to prove this equality for a subset with a dense span. First
we consider f ∈ D(L₀) and we estimate ‖(I − T)f − λG(λ)(I − T)f‖∞
as follows:

‖(I − T)f − λG(λ)(I − T)f‖∞
≤ (1/λ)‖(λI − L)((I − T)f − λG(λ)(I − T)f)‖∞
= (1/λ)‖(λI − L₁)(I − T)f − λ(I − T)f‖∞ = (1/λ)‖L₁(I − T)f‖∞. (3.338)

This expression tends to zero as λ → ∞. For brevity we write
F(λ) = (I − V(λ)R(λ))⁻¹f. For f ∈ D(L₀) the following equalities are valid:

(λG(λ) − I)L₁(I − T)f = λG(λ)L₁(I − T)f − L₁(I − T)f
= λ²G(λ)(I − T)f − λ(I − T)f − L₁(I − T)f
= {λ²(I − T)R(λ) + λTV(λ)R(λ) − λ(I − T)(I − V(λ)R(λ))}F(λ) − L₁(I − T)f
= {λ(I − T)L₀R(λ) + λTV(λ)R(λ) + λ(I − T)V(λ)R(λ)}F(λ) − L₁(I − T)f
→ {(I − T)L₀ + TV + (I − T)V}f − L₁(I − T)f = 0 (λ → ∞), (3.339)

because λR(λ) → I and V(λ) → V strongly, and
(I − T)L₀ + TV + (I − T)V = (I − T)L₀ + V = L₁(I − T).
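Both the resolvent identity (3.337) and the strong continuity established in (3.338)-(3.339) have elementary finite-dimensional counterparts for G(λ) = (λI − A)⁻¹; the matrix A and vector f below are arbitrary stand-ins of our own choosing.

```python
import numpy as np

A = np.array([[-1.0, 1.0], [2.0, -2.0]])
G = lambda z: np.linalg.inv(z * np.eye(2) - A)

# first resolvent identity, as in (3.337)
lam, mu = 40.0, 60.0
assert np.allclose(G(lam) - G(mu) - (mu - lam) * G(mu) @ G(lam), 0.0)

# strong continuity: lam G(lam) f -> f at rate O(1/lam), cf. (3.338)
f = np.array([0.3, -0.7])
errs = [np.linalg.norm(lam * G(lam) @ f - f) for lam in (1e2, 1e3, 1e4)]
assert errs[0] > errs[1] > errs[2] and errs[2] < 1e-3
```

The error ‖λG(λ)f − f‖ ≈ ‖Af‖/λ, matching the 1/λ rate in (3.338).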

From (3.338) together with (3.339) we conclude that lim λ→∞ (λG(λ)f − f) = 0
for all f in the span of R(I − T) and R(L₁(I − T)). By assumption this span is
dense, and consequently the resolvent family {G(λ) : λ > 0, λ large} is strongly
continuous. In order to conclude the proof of the existence result we choose f₁
and f₂ in the space D(L₀) and we notice the following identities:

G(λ){(I − T)(I − V(λ)R(λ))(λI − L₀)f₁ + (λI − L₁)(I − T)f₂}
= (I − T)(f₁ + f₂) + T(V(λ)/λ)f₁,

and so the space G(λ)C₀(E) contains the linear span of the spaces R(I − T)
and R(TV(λ)R(λ)). From the resolvent equation it is clear that the space
G(λ)C₀(E) does not depend on the variable λ. So we see that the space
G(a)C₀(E) contains, for a given function f ∈ D(L₀), the family {λTV(λ)R(λ)f :
λ ≥ a}. Hence the function TVf = lim λ→∞ λTV(λ)R(λ)f belongs to the closure of
the space G(a)C₀(E). Since L₁(I − T)f = TVf + (I − T)(L₁(I − T)f + TL₀f),
for f ∈ D(L₀), we conclude that the range of L₁(I − T) is contained in the
closure of G(a)C₀(E). Since the latter space also contains R(I − T), it follows
from the density of the space

R(I − T) + R(L₁(I − T))

in C₀(E) that the domain of the resolvent, i.e. G(a)C₀(E), is dense in C₀(E).
From the previous discussion it also follows that the operator which assigns to
G(λ)f the function λG(λ)f − f extends the operator L₁ restricted to R(I − T).
It is now also clear that the subspace {G(a)f : f ∈ C₀(E)} is dense, and so
there exists a Feller semigroup generated by the operator L with
graph {(G(a)f, aG(a)f − f) : f ∈ C₀(E)}.


For the uniqueness we proceed as follows. Let P¹ₓ and P²ₓ be two solutions of the
martingale problem. We define the family of operators {S(t) : t ≥ 0} as follows:
S(t)f(x) = E¹ₓ[f(X(t))] − E²ₓ[f(X(t))]. From the martingale property it then
follows that S′(t)f = S(t)Lf for f belonging to the subspace R(I − T) + R(L₁(I − T)).
Moreover we have S(0)f(x) = 0 for all functions f ∈ C₀(E). Then we write, for
f₁ and f₂ ∈ D(L₀),

0 = −∫₀^∞ (d/dt)(e^(−λt) S(t)G(λ)((I − T)f₁ + TL₁(I − T)f₂)) dt
= ∫₀^∞ e^(−λt) S(t)(λI − L)G(λ)((I − T)f₁ + TL₁(I − T)f₂) dt
= ∫₀^∞ e^(−λt) S(t)((I − T)f₁ + TL₁(I − T)f₂) dt. (3.340)

Consequently S(t)(I − T)f₁ = S(t)TL₁(I − T)f₂ = 0 for all functions f₁ and f₂
in the space D(L₀). We also have, upon using (3.340), the following equality:

∫₀^t S(τ)L₁(I − T)f dτ = S(t)(I − T)f − S(0)(I − T)f = 0. (3.341)
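The computation (3.340) rests on the Laplace-transform identity ∫₀^∞ e^(−λt) e^(tA) dt = (λI − A)⁻¹, valid when λ exceeds the real parts of the spectrum of A. A small matrix illustration (A and λ are toy data of our own, and the integral is evaluated in an eigenbasis):

```python
import numpy as np

A = np.array([[-2.0, 1.0], [0.5, -1.0]])   # eigenvalues are real and negative
lam = 1.5
w, S = np.linalg.eig(A)                    # A = S diag(w) S^{-1}

# integrating e^{(w - lam) t} over [0, infty) gives 1 / (lam - w) per eigenvalue
laplace = (S * (1.0 / (lam - w))) @ np.linalg.inv(S)
resolvent = np.linalg.inv(lam * np.eye(2) - A)
assert np.allclose(laplace, resolvent)
```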




Since by assumption the sum of the vector spaces R(I − T) and R(L₁(I − T)) is
dense in the space C₀(E), we conclude S(t) = 0, and hence from a general result
on uniqueness of the martingale problem we finally obtain that P¹ₓ = P²ₓ for all
x ∈ E. For more details see Proposition 2.9 (Corollary, p. 206) of Ikeda and
Watanabe [61]. This completes the proof of Theorem 3.110. □





Index 


D: dyadic rational numbers, 380
K: strike price, 191
N(·): normal distribution, 191
P(Ω): compact metrizable Hausdorff
space, 129
S: spot price, 191
T: maturity time, 191
λ-system, 1, 68
Gδ-set, 332, 334

M: space of complex measures on ℝᵏ, 298

Vo’l 103

π-system, 68

σ-algebra, 1, 3

σ-field, 1, 3

σ: volatility, 191

r: risk-free interest rate, 191

(DL)-property, 416 

adapted process, 17, 374, 389, 406 
additive process, 23, 24 
affine function, 8 
affine term structure model, 210 
Alexandroff compactification, 301 
almost sure convergence of 
sub-martingales, 386 
arbitrage-free, 190 

backward propagator, 197 
Banach algebra, 298, 303 
Bernoulli distributed random variable, 56 
Bernoulli topology, 310 
Beurling-Gelfand formula, 302, 303 
Birkhoff’s ergodic theorem, 74 
birth-death process, 35
Black-Scholes model, 187, 190 
Black-Scholes parameters, 193 
Black-Scholes PDE, 190 
Bochner’s theorem, 90, 91, 308, 314 
Boolean algebra of subsets, 361 
Borel-Cantelli’s lemma, 42, 105 
Brownian bridge, 94, 98, 99, 101 
Brownian bridge measure 
conditional, 103 


Brownian motion, 1, 16-18, 24, 84, 94, 98, 
101, 102, 105, 108-110, 113, 115, 181, 
189, 193, 197, 243, 283, 290, 291 
continuous, 104 
distribution of, 107 
geometric, 188 
Hölder continuity of, 154
pinned, 98 
standard, 70 

Brownian motion with drift, 98 

càdlàg modification, 395
càdlàg process, 376

Cameron-Martin-Girsanov formula, 277
Cameron-Martin transformation, 182, 280
canonical process, 109
Carathéodory measurable set, 363
Carathéodory’s extension theorem, 361,
362, 364

central limit theorem, 74 
multivariate, 70 

Chapman-Kolmogorov identity, 16, 25, 

81, 107, 116, 149 

characteristic function, 76, 102, 390 
characteristic function (Fourier 
transform), 98 

classification properties of Markov chains, 
35 

closed martingale, 17, 150 
compact-open topology, 310 
complex Radon measure, 296 
conditional Brownian bridge measure, 103 
conditional expectation, 2, 3, 78 
conditional expectation as orthogonal 
projection, 5 

conditional expectation as projection, 5 
conditional probability kernel, 399 
consistent family of probability spaces, 66 
consistent system of probability measures, 
13, 360 
content, 362 
extended, 362

continuity theorem of Lévy, 324




contractive operator, 197 
convergence in probability, 371, 386 
convex function and affine functions, 8 
convolution product of measures, 298 
convolution semigroup of measures, 314 
convolution semigroup of probability 
measures, 391 
coupling argument, 288 
covariance matrix, 108, 197, 200, 203 
cylinder measure, 360 
cylinder set, 358, 367 
cylindrical measure, 89, 125 

decomposition theorem of Doob-Meyer, 20 
delta hedge portfolio, 190 
density function, 80 
Dirichlet problem, 265 
discounted pay-off, 209 
discrete state space, 25 
discrete stopping time, 19 
dispersion matrix, 94 
dissipative operator, 118 
distribution of random variable, 102 
distributional solution, 266 
Doléans measure, 168
Donsker’s invariance principle, 71 
Doob’s convergence theorem, 17, 18 
Doob’s maximal inequality, 21, 23, 160, 
384 

Doob’s maximality theorem, 21 
Doob’s optional sampling theorem, 20, 86, 
381, 388, 409 

Doob-Meyer decomposition 

for discrete sub-martingales, 383 
Doob-Meyer decomposition theorem, 148, 
149, 295, 384, 410, 419, 421 
downcrossing, 157 
Dynkin system, 1, 68, 111, 300, 378 

Elementary renewal theorem, 38 
equi-integrable family, 369 
ergodic theorem, 295 
ergodic theorem in L 2 , 342 
ergodic theorem of Birkhoff, 76, 340, 344, 
354 

European call option, 188 
European put option, 188 
event, 1 
exit time, 84 

exponential Brownian motion, 186 
exponential local martingale, 254, 255 
exponential martingale probability 
measure, 192 
extended content, 362 
extension theorem 


of Kolmogorov, 360 
exterior measure, 364 

face value, 210 

Feller semigroup, 79, 113, 114, 120, 121, 
140, 264 

conservative, 114 

generator of, 118, 137, 140, 143, 144 
strongly continuous, 113 
Feller-Dynkin semigroup, 79, 122, 264 
Feynman-Kac formula, 181 
filtration, 109, 264 
right closure of, 109 
finite partition, 3 

finite-dimensional distribution, 373 
first hitting time, 18 
forward propagator, 197 
forward rate, 214 

Fourier transform, 90, 93, 96, 102, 251 
Fubini’s theorem, 199 
full history, 109 
function 

positive-definite, 305 
functional central limit theorem (FCLT), 
70, 71 

Gaussian kernel, 16, 107 

Gaussian process, 89, 110, 115, 200, 203 

Gaussian variable, 153 

Gaussian vector, 76, 93, 94 

GBM, 186 

geometric Brownian motion, 189 
generator of Feller semigroup, 118, 137, 
140, 144, 228, 230, 231, 233 
generator of Markov process, 200, 203 
geometric Brownian motion, 188 
geometric Brownian motion = GBM, 186 
Girsanov transformation, 182, 243, 280 
Girsanov’s theorem, 193 
graph, 232 

Gronwall’s inequality, 246 

Hölder continuity of Brownian motion,
154

Hölder continuity of processes, 151
Hahn decomposition, 295 
Hahn-Kolmogorov’s extension theorem, 
364 

harmonic function, 86 
hedging strategy, 188 
Hermite polynomial, 258 
Hilbert cube, 333, 334 
hitting time, 18 

i.i.d. random variables, 24 




index set, 11 

indistinguishable processes, 104, 374, 386 
information from the future, 374 
initial reward, 40 
integration by parts formula, 282 
interest rate model, 204 
internal history, 374, 394 
invariant measure, 35, 48, 51, 201, 204 
minimal, 50 

irreducible Markov chain, 48, 51, 54 
Itô calculus, 87, 278, 279
Itô isometry, 162
Itô representation theorem, 274
Itô’s lemma, 189, 270

Jensen inequality, 149 

Kolmogorov backward equation, 26 
Kolmogorov forward equation, 26 
Kolmogorov matrix, 26 
Kolmogorov’s extension theorem, 13, 17, 
89-91, 93, 125, 130, 357, 360, 361, 

366 

Komlós’ theorem, 295, 409, 420

Lévy’s weak convergence theorem, 115
Lévy process, 89, 389, 390, 392
Lévy’s characterization of Brownian
motion, 194, 249
law of random variable, 102 
Lebesgue-Stieltjes measure, 364 
lemma of Borel-Cantelli, 10, 152 
lexicographical ordering, 333 
life time, 79, 117 

local martingale, 194, 252, 264, 267, 268, 
271, 278, 280 
local time, 292 

locally compact Hausdorff space, 15 

marginal distribution, 373 
marginal of process, 13 
Markov chain, 35, 44, 58, 59, 66 
irreducible, 48, 54 
recurrent, 48 

Markov chain recurrent, 48 
Markov process, 1, 16, 29, 30, 61, 79, 89, 
102, 110, 113, 115, 119, 144, 202, 406, 
408 

strong, 119, 406 
time-homogeneous, 407 
Markov property, 25, 26, 30, 31, 46, 82, 
110, 113, 142 
strong, 44 

martingale, 1, 17, 20, 80-82, 85-88, 103, 
109, 243, 280, 281, 378, 382, 396 


(DL)-property, 227 
closed, 17 
local, 194 

maximal inequality for, 225 
martingale measure, 209, 281 
martingale problem, 118, 128, 137, 140, 
143, 144, 228, 230, 231, 235, 264, 265 
uniquely solvable, 118 
well-posed, 118 
martingale property, 131 
martingale representation theorem, 263, 
275 

maximal ergodic theorem, 351 
maximal inequality of Doob, 386 
maximal inequality of Lévy, 104
maximal martingale inequality, 225 
maximum principle, 118, 140, 141, 143, 
232 

measurable mapping, 377 
measure

invariant, 48, 201, 204
stationary, 204
metrizable space, 15 
Meyer process, 419 
minimal invariant measure, 50 
modification, 374 

monotone class theorem, 69, 103, 107, 

110, 112, 116, 378, 394, 398, 401, 404 
alternative, 378 

multiplicative process, 23, 24, 79 
multivariate classical central limit 
theorem, 70 

multivariate normal distributed vector, 76 
multivariate normally distributed random 
vector, 93 

negative-definite function, 314, 316, 396 
no-arbitrage assumption, 209 
non-null recurrent state, 51 
non-null state, 47 

non-positive recurrent random walk, 57 
non-time-homogeneous process, 23 
normal cumulative distribution, 188 
normal distribution, 197 
Novikov condition, 281 
Novikov’s condition, 209 
null state, 47 
numeraire, 215 

number of upcrossings, 156, 379, 380 

one-point compactification, 15 
operator 




dissipative, 118 

operator which maximally solves the 
martingale problem, 118, 140, 228 
Ornstein-Uhlenbeck process, 98, 102, 200, 
201, 210 

orthogonal projection, 340 
oscillator process, 98, 99 
outer measure, 363, 364 

partial reward, 40 
partition, 4 
path, 373 
path space, 117 

pathwise solutions to SDE, 288, 289 
unique, 291, 292 

pathwise solutions to SDE’s, 244 
payoff process 
discounted, 193 

PDE for bond price in the Vasicek model, 
213 

pre-measure, 362
persistent state, 47 
pinned Brownian motion, 98 
Poisson process, 26, 27, 29, 36, 89, 159 
Polish space, 15, 90, 123, 334, 335, 360, 
361, 366 
portfolio 

delta hedge, 190 
positive state, 47 

positive-definite function, 297, 302, 305, 
314 

positive-definite matrix, 90, 96, 197 
positivity preserving operators, 345 
pre-measure, 363, 364 
predictable process, 20, 193, 418 
probability kernel, 399, 408 
probability measure, 1 
probability space, 1 
process 

Gaussian, 200, 203 
increasing, 21 
predictable, 20 

process adapted to filtration, 374 
process of class (DL), 20, 21, 148, 149, 
161, 409-411, 420, 421 
progressively measurable process, 377 
Prohorov set, 72, 335, 337-339 
projective system of probability measures, 
13, 121, 360 

projective system of probability spaces, 
125 

propagator 
backward, 197 


quadratic covariation process, 249, 264, 
279 

quadratic variation process, 253 

Radon-Nikodym derivative, 11, 408 
Radon-Nikodym theorem, 4, 78, 408 
random walk, 58 
realization, 25, 373 
recurrent Markov chain, 48 
recurrent state, 47 

recurrent symmetric random walk, 55 
reference measure, 80, 81, 83 
reflected Brownian motion, 228 
renewal function, 35 
renewal process, 35, 40 
renewal-reward process, 39, 40 
renewal-reward theorem, 41 
resolvent family, 122 
return time, 55 
reward 
initial, 40 
partial, 40 
terminal, 40 
reward function, 40 
Riemann-Stieltjes integral, 364 
Riesz representation theorem, 295, 296, 
305 

right closure of filtration, 109 
right-continuous filtration, 374 
right-continuous paths, 19 
ring of subsets, 361 
risk-neutral measure, 193, 209 
risk-neutral probability measure, 192 
running maximum, 23 

sample path, 25 
sample path space, 11, 25 
sample space, 25 
semi-martingale, 419 
semi-ring, 364 

semi-ring of subsets, 361, 362 
semigroup 
Feller, 264 
Feller-Dynkin, 264 
shift operator, 109, 117 
Skorohod space, 117, 122, 128 
Skorohod-Dudley-Wichura representation 
theorem, 283, 286 
Souslin space, 90, 361, 365, 366 
space-homogeneous process, 29 
spectral radius, 303 
standard Brownian motion, 70 
state 

non-null, 47 
null, 47 




persistent, 47 
positive, 47 
recurrent, 47 

state space, 11, 17, 79, 117, 400, 406 
discrete, 25 

state variable, 11, 25, 117 
state variables, 125 
state, transient, 47
stationary distribution, 25, 51 
stationary measure, 204 
stationary process, 11 
step functions with unit jumps, 159 
Stieltjes measure, 364 
Stirling’s formula, 54 
stochastic differential equation, 182 
stochastic integral, 102, 253 
stochastic process, 10 
stochastic variable, 11, 371 
stochastically continuous process, 159 
stochastically equivalent processes, 374 
stopped filtration, 377 
stopping time, 18, 20, 44, 58, 64, 68, 112, 
252, 374-377, 381, 382, 405 
discrete, 19 
terminal, 18, 24 

strong law of large numbers, 41, 76, 155, 
340, 344 

strong law of large numbers (SLLN), 38 
strong Markov process, 102, 119, 121, 140, 
406 

strong Markov property, 44, 48, 113 
strong solution to SDE, 244 
strong solutions to SDE 
unique, 244 

strong time-dependent Markov property, 
113, 120 

strongly continuous Feller semigroup, 113 
sub-martingale, 378, 381, 384 
sub-probability kernel, 406 
sub-probability measure, 1 
submartingale, 17, 20, 227 
submartingale convergence theorem, 158 
submartingale of class (DL), 421 
super-martingale, 378 
supermartingale, 17, 20 

Tanaka’s example, 292 
terminal reward, 40 
terminal stopping time, 18, 24, 83 
theorem 

Ito representation, 274 
Kolmogorov’s extension, 278 
martingale representation, 275 
of Arzela-Ascoli, 72, 73 
of Bochner, 90, 304, 308 


of Doob-Meyer, 20 
of Dynkin-Hunt, 397 
of Fernique, 221 
of Fubini, 199, 330 
of Girsanov, 277, 280 
of Helly, 334 
of Komlos, 409 
of Lévy, 253, 270, 290
of Prohorov, 72 
of Radon-Nikodym, 290 
of Riemann-Lebesgue, 300 
of Scheffé, 39, 278, 369
of Schoenberg, 314 
of Stone-Weierstrass, 301, 305 
Skorohod-Dudley-Wichura
representation, 283, 286
time, 11 
time change, 19 
stochastic, 19 

time-dependent Markov process, 200, 203 
time-homogeneous process, 11, 29 
time-homogeneous transition probability, 
25 

time-homogenous Markov process, 407 
topology of uniform convergence on 
compact subsets, 310 

tower property of conditional expectation, 
5 

transient non-symmetric random walk, 57 
transient state, 47 

transient symmetric random walk, 55 
transition function, 119 
transition matrix, 51 
translation operator, 11, 25, 109, 117, 

400, 406 

translation variables, 125 

uniformly distributed random variable, 
394 

uniformly integrable family, 5, 6, 20, 39, 
369, 388 

uniformly integrable martingale, 389 
uniformly integrable sequence, 385 
unique pathwise solutions to SDE, 244 
uniqueness of the Doob-Meyer 
decomposition, 417 
unitary operator, 340, 342 
upcrossing inequality, 156, 157, 383 
upcrossing times, 156 
upcrossings, 156 

vague convergence, 371 
vague topology, 310, 334 
vaguely continuous convolution semigroup 
of measures, 315 




vaguely continuous convolution semigroup 
of probability measures, 389, 390 
Vasicek model, 204, 210 
volatility, 188 

von Neumann’s ergodic theorem, 340 

Wald’s equation, 36 

weak convergence, 325 

weak law of large numbers, 75, 340 

weak solutions, 264 

weak solutions to SDE’s, 244, 277, 280, 
288 

unique, 265, 292 

weak solutions to stochastic differential 
equations, 265 
weak topology, 310 
weak*-topology, 334 
weakly compact set, 338, 339 
Wiener process, 98 





To See Part 2 download 
Advanced stochastic processes: Part 2 

